Charlotte Hu | Popular Science
https://www.popsci.com/authors/charlotte-hu/
Awe-inspiring science reporting, technology news, and DIY projects. Skunks to space robots, primates to climates. That's Popular Science, 145 years strong.

Finally, a smart home for chickens
https://www.popsci.com/technology/smart-home-for-chickens-coop/ | Thu, 19 Oct 2023
Rendering of the Coop structure in grass. Coop

This startup uses an "AI guardian" named Albert Eggstein to count eggs and keep an eye on nearby predators.

For most Americans, eggs matter a lot. In a year, an average American is estimated to eat almost 300 eggs (that’s either in the form of eggs by themselves or in egg-utilizing products like baked goods). We truly are living in what some researchers have called the Age of the Chicken—at least geologically, the humble poultry will be one of our civilization’s most notable leftovers.

Food systems in the US are fairly centralized. That means small disruptions can ratchet up to become large disturbances. Just take the exorbitant egg prices from earlier this year as one example. 

To push back against supply chain issues, some households have taken the idea of farm to table a step further. Demand for backyard chickens rose both during the pandemic, and at the start of the year in response to inflation. But raising a flock can come with many unseen challenges and hassles. A new startup, Coop, is hatching at exactly the right time. 

[Related: 6 things to know before deciding to raise backyard chickens]

Coop was founded by AJ Forsythe and Jordan Barnes in 2021, and it packages all of the software essentials of a smart home into a backyard chicken coop. 

Barnes says that she can’t resist an opportunity for a chicken pun; puns are peppered into the copy on the company’s website and the names of its products, and even baked into her title at the company (CMO, she notes, stands for chief marketing officer, but also chicken marketing officer). She and co-founder Forsythe invited Popular Science to a rooftop patio on the Upper East Side to see a fully set up Coop and have a “chick-chat” about the company’s tech.

In addition to spending time getting to know the chickens, they’ve put 10,000-plus hours into the design of the Coop. Fred Bould, who had previously worked on Google’s Nest products, helped them conceptualize the Coop of the future.

The company’s headquarters in Austin has around 30 chickens, and both Barnes and Forsythe keep chickens at home, too. In the time that they’ve spent with the birds, they’ve learned a lot about them, and have both become “chicken people.” 

An average chicken will lay about five eggs a week, depending on weather conditions and its ranking in the pecking order. Birds at the top of the pecking order get more food, so they tend to lay more eggs. “They won’t break rank on anything. Pecking order is set,” says Barnes.

Besides laying eggs, chickens can be used for composting dinner scraps. “Our chickens eat like queens. They’re having sushi, Thai food, gourmet pizza,” Barnes adds.  

For the first-generation smart Coop, which comes with a chicken house, a wire fence, lights that can be controlled remotely, and a set of cameras, all a potential owner needs to get things running on the ground is Wi-Fi and about 100 square feet of grass. “Chickens tend to stick together. You want them to roam around and graze a little bit, but they don’t need sprawling plains to have amazing lives,” says Barnes. “We put a lot of thought into the hardware design and the ethos of the design. But it’s all infused with a very high level of chicken knowledge—the circumference of the roosting bars, the height of everything, the ventilation, how air flows through it.”

[Related: Artificial intelligence is helping scientists decode animal languages]

They spent four weeks designing a compostable, custom-fit poop tray because they learned through market research that cleaning the coop was one of the big barriers for people who wanted chickens but decided against getting them. And right before the Coop was supposed to go into production a few months ago, they halted it after realizing that the lower bars on the wire cage were spaced widely enough for a desperate raccoon to sneak its paws through. They redesigned the bars with much closer spacing.

The goal of the company is to create a tech ecosystem that makes raising chickens easy for beginners and the “chicken-curious.” Currently, 56 percent of their customers have never raised chickens before, they say.

Key to Coop’s offering is its brain: AI software named Albert Eggstein that can detect both the chickens and any potential predators that might be lurking around. “This is what makes the company valuable,” says Barnes. Not only can the camera pick up that there are four chickens in the frame, but it can tell the chickens apart from one another. It uses these learnings to provide insights through an accompanying app, much like Amazon’s Ring does.

[Related: Do all geese look the same to you? Not to this facial recognition software.]

As seasoned chicken owners will tell newbies, being aware of predators is the name of the game. Coop’s software can categorize nearby predators, from muskrats to hawks to dogs, with 98 percent accuracy.

“We developed a ton of software on the cameras, we’re doing a bunch of computer vision work and machine learning on remote health monitoring and predator detection,” Forsythe says. “We can say, hey, raccoons detected outside, the automatic door is closed, all four chickens are safe.”

The system runs on two cameras, one stationed outside in the run and one inside the roost. In the morning, the door to the roost is raised automatically 20 minutes after sunrise, and at night, a feature called nest mode can tell owners whether all their chickens have come home to roost. The computer vision software is trained on a database of about 7 million images. There is also sound detection software, which can infer chicken moods and behaviors from the pitch and pattern of their clucks, chirps, and alerts.

[Related: This startup wants to farm shrimp in computer-controlled cargo containers]

It can also condense the activity into weekly summary sheets, sending a note to chicken owners telling them that a raccoon has been a frequent visitor for the past three nights, for example. It can also alert owners to social events, like when eggs are ready to be collected.  

A feature the team created called “Cluck talk” can measure the decibel level of chicken sounds to make a general assessment of whether they are hungry, happy, broody (which is when they just want to sit on their eggs), or in danger.
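
Coop hasn’t published how “Cluck talk” maps sound levels to moods, so the thresholds below are invented for illustration. Still, a minimal Python sketch shows the general shape of a loudness-based classifier (the real feature also weighs pitch and pattern):

```python
import numpy as np

def sound_level_db(samples: np.ndarray, ref: float = 20e-6) -> float:
    """Rough sound pressure level in dB for an audio buffer.
    Assumes the samples are calibrated to pascals; real hardware
    would need a proper microphone calibration step."""
    rms = np.sqrt(np.mean(np.square(samples)))
    return 20 * np.log10(rms / ref)

def classify_cluck(samples: np.ndarray) -> str:
    """Toy mood guess from loudness alone, with made-up thresholds."""
    level = sound_level_db(samples)
    if level > 85:
        return "possible danger"
    if level > 70:
        return "hungry"
    if level < 50:
        return "broody"
    return "content"
```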

There are a lot of chicken-specific behaviors that they can build models around. “Probably in about 6 to 12 months we’re going to roll out remote health monitoring. So it’ll say, chicken Henrietta hasn’t drank water in the last six hours and is a little lethargic,” Forsythe explains. That will be part of a plan to flesh out a telehealth offering that could connect owners with vets they can message and share videos with.

The company started full-scale production of its first-generation Coops last week. It’s manufacturing the structures in Ohio through a specialized process called rotomolding, which is similar to how Yeti coolers are made. Fifty beta customers have signed up to get Coops at an early-bird price of $1,995. Like Peloton and Nest, customers will also pay a monthly subscription fee of $19.95 for app features like the AI tools. In addition to the Coops, the company also offers services like chicken-sitting (aptly named chicken Tenders).

For the second generation Coops, Forsythe and Barnes have been toying with new ideas. They’re definitely considering making a bigger version (the one right now can hold four to six chickens), or maybe one that comes with a water gun for deterring looming hawks. The chickens are sold separately.

This non-invasive device blasts apart tumors with sound waves
https://www.popsci.com/technology/histosonics-tumor-sound-wave-fda-approval/ | Sat, 14 Oct 2023
HistoSonics' tumor destroying device. Erica Bass, Rogel Cancer Center, Michigan Medicine

The tech recently received FDA approval, and will soon be available as a treatment option for patients in the US.

This week, the US Food and Drug Administration gave the green light to a device that uses ultrasound waves to blast apart tumors in the liver. This technique, which requires no needles, injections, knives, or drugs, is called histotripsy, and it’s being developed by a company called HistoSonics, founded by engineers and doctors from the University of Michigan in 2009. 

According to a press release, this approval comes after the results of a series of clinical trials indicated that it can effectively destroy liver tumors while being safe for patients. Now hospitals can purchase the device and offer it to patients as a treatment option. The machine works by directing targeted pulses of high-energy ultrasound waves at a tumor, which creates clusters of microbubbles inside it. When the bubbles form and collapse, they stress the cells and tissues around them, allowing them to break apart the tumor’s internal structure, leaving behind scattered bits that the immune system can then come in to sweep up. 

Here’s the step-by-step process: After patients are under anesthesia, a treatment head that looks uncannily like a pair of virtual reality goggles is placed over their abdomen. Clinicians toggle through a control screen to look at and locate the tumor. Then they lock and load the sound waves. The process is reportedly fast and painless, and the recovery period after the procedure is short.

Through a paired imaging machine, clinicians can also confirm that the sound waves are targeted at the tumor while avoiding other parts of the body. A robotic arm can move the transducer to get better aim at the tumor region. The treatment may also help the patient’s immune system learn to recognize the tumor cells as threats, an effect that prevented recurrence or metastasis in 80 percent of mouse subjects.

While the approval of the device is a big step for broadening the options for cancer treatments, the use of sound waves in medicine is not new. Another platform, Exablate Prostate by Insightec, was cleared by the FDA for human trials in prostate cancer patients (although clearance is not quite the same thing as an approval). Nonetheless, the results have been encouraging. The histotripsy technique is being applied in many preclinical experiments for tumors outside of the brain, such as in renal cancer, breast cancer, pancreatic cancer, and musculoskeletal cancer.

Beyond tumors, a similar technique called lithotripsy, which uses shock waves, has been a treatment for breaking apart painful kidney stones so they become small enough for patients to pass. 

Watch the device at work below:

Why NASA will launch rockets to study the eclipse
https://www.popsci.com/technology/nasa-sounding-rocket-eclipse/ | Fri, 13 Oct 2023
The progression of a solar eclipse over Oregon. NASA

Solar events like this can stir up particles in the Earth's ionosphere and disrupt radio frequency communications.

An annular “ring of fire” eclipse is always a bewitching event. This year, timed just right to usher in spooky season, the October 14 solar spectacular will cut a path of near darkness through the Western Hemisphere, crossing Oregon, Texas, Central America, Colombia, and northern Brazil.

Eclipses can be more than just emotionally stirring. Solar eclipses create waves of disturbances across the electrically charged particles in Earth’s ionosphere, a layer of the upper atmosphere that plays an important role in radio frequency communications. Here, heated and charged ions and electrons swirl around in a soup of plasma that envelops the planet.

To understand the effect that eclipses have on this plasma, scientists from NASA are planning to shoot a series of 60-foot-tall rockets up to collect information at the source.

The ionosphere sits between 60 and 300 kilometers above Earth’s surface, roughly 37 to 190 miles up. “The only way to study between 50 kilometers and 300 kilometers in situ is through rockets,” says Aroh Barjatya, director of the Space and Atmospheric Instrumentation Lab and principal investigator on the upcoming NASA sounding rocket mission, which is called Atmospheric Perturbations around the Eclipse Path. By in situ, he means quite literally in the thick of it.

[Related: A new satellite’s “plasma brake” uses Earth’s atmosphere to avoid becoming space junk]

“Satellites, which are flying at 400 kilometers, can look down, but they cannot measure in the middle of the ionosphere. It can only be doing remote sensing,” he adds. “And the ground-based measurements are also remote sensing.” Rockets are a relatively low-cost way to get right into the ionosphere.

Along with the rockets, the team will be sending up high-altitude balloons that will measure the weather every 20 minutes. These balloons will cover the first 100,000 feet, or about 19 miles, above the ground. Then come the stars of the show: three sounding rockets fitted with both commercial and military-surplus solid propellant rocket motors. The trio is designed to give a view of how the ionosphere changes over time, and they will be launched directly into the shadow of the eclipse from a site at the White Sands facility in New Mexico. One of the rockets will be sent up right before the eclipse, one during, and one after. Because they’re sounding rockets, they will go up to the target height and come back down, so they’re equipped with a parachute recovery system.

Mechanical technician John Peterson of NASA’s Wallops Flight Facility and APEP mission leader Aroh Barjatya check the sensors on the rocket. NASA’s Wallops Flight Facility/Berit Bland

“If you think of a big orbital vehicle sending a satellite up, they’re going to reach 14,000 miles/hour when they get into space. So they’re going to reach that orbital escape velocity and put their payload into orbit, and it’s going to stay up there for a long time,” Max King, deputy chief of the Sounding Rockets Program Office at NASA GSFC, Wallops Flight Facility, explains. “Ours are what we call suborbital. So they go up, but by the time we’ve gotten into space, we’ve slowed down to zero, and start falling back into the atmosphere. Over that curved trajectory, we get about 10 minutes in [the ionosphere] where we can take measurements and conduct science.” 
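
As a rough sanity check on that window, you can treat the rocket as a simple ballistic projectile. The numbers below (constant gravity, no drag, an assumed 350-kilometer apogee, and a 100-kilometer “floor” for the region of interest) are illustrative assumptions, not NASA’s actual flight profile:

```python
import math

g = 9.81           # m/s^2, gravitational acceleration, treated as constant
apogee_km = 350    # assumed peak altitude of the sounding rocket
floor_km = 100     # rough lower edge of the region being studied

# Height the rocket climbs above the "floor" and then falls back through.
h = (apogee_km - floor_km) * 1000  # meters

# Time to fall from apogee to the floor; the climb through the same band is symmetric.
t_one_way = math.sqrt(2 * h / g)
print(f"~{2 * t_one_way / 60:.1f} minutes above {floor_km} km")  # ≈ 7.5 minutes
```

Push the apogee higher or lower the floor and the window stretches toward the roughly 10 minutes King describes; the point is that a suborbital arc buys minutes, not hours, of in-situ measurement.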

[Related: We can predict solar eclipses to the second. Here’s how.]

Ten minutes may not seem very long. But a lot of data can be gathered during that time. As the rockets reach the ionosphere, electrostatic probes will pop out, measuring plasma temperature and density, as well as the surrounding electric and magnetic fields. A telemetry system sends data back to the ground continuously.

The main objective of the mission is to study the plasma dynamics during the eclipse that can impact radio frequency communications. Any sort of unexpected turbulence can disrupt signals to a satellite, GPS, ham radio operators, or over-the-horizon radar that the military uses. “Ionosphere is the thing which bounces radio frequencies, and all of the space communications go through the ionosphere,” Barjatya says. 

After the October mission, the team will search the desert for the fallen parts of the rockets and refurbish them for a second set of launches during the next eclipse, in April 2024, this time studying its effects on the ionosphere a bit farther out from the direct path. Getting more details about what happens to the ionosphere when the sun is suddenly blotted out will give researchers insight into which radio frequencies get affected and how widespread the disturbance is, and it will help models better prepare for these potential disruptions in the future.

The APEP team prepping for launch. Army/Judy K Hawkins

NASA has launched quite a few rockets during eclipses. The last big campaign was in 1970, when the agency launched 25 rockets in 15 minutes. “In 1970 the eclipse went right above the Wallops facility [in Virginia],” Barjatya says. But those rockets were mostly meteorological rockets. Today’s rockets each contain four small payloads filled with scientific instruments. “One rocket launch gives me five measurements at the same time,” he adds. “So one rocket of today is actually equal to five rockets of 1970.”

These rockets are not specialized for eclipse-gazing alone. In fact, NASA uses them in about 20 missions a year, worldwide. “We go where the science is,” King says. Sounding rockets can be used to launch telescopes for spying on celestial bodies, supernovas, star clusters, or even flares and emissions from our own sun.

The main launch sites in North America are at the Wallops facility in Virginia and the White Sands facility in New Mexico. Outside of the US, Norway also hosts a major launch site. There, scientists are using the rockets to observe the Northern Lights and other auroral phenomena. Or they can be used to take a gander at something called the cusp region, the closest portal in the sky to near-Earth space. “The cusp region is where the magnetic field lines all come into the same point,” King notes. “The only way you can really study that is to shoot a rocket through it.”

The agency will be live-streaming the launches, which you can watch here.

The world’s most powerful computer could soon help the US build better nuclear reactors
https://www.popsci.com/technology/argonne-exascale-supercomputer-nuclear-reactor/ | Fri, 06 Oct 2023
The Aurora supercomputer at Argonne. Argonne National Laboratory

Here’s how engineers will use it to model the complex physics inside the heart of a nuclear power plant.

Argonne National Laboratory in Lemont, Illinois, is getting a new supercomputer, Aurora, which its scientists will use to study optimal nuclear reactor designs. As of now, the lab is using a system called Polaris, a 44-petaflops machine that can perform about 44 quadrillion calculations per second. 

Aurora, which is currently being installed, will have more than 2 exaflops of computing power, giving it the capacity to do 2 quintillion calculations per second, almost 50 times as many as the old system. Once the unprecedented machine comes online, it’s expected to lead the TOP500 list that ranks the most powerful computers in the world. It was expected to start running earlier, but has had delays due to manufacturing issues.
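
That “almost 50 times” figure follows directly from the units: an exaflop is 1,000 petaflops, so comparing Aurora’s 2 exaflops to Polaris’ 44 petaflops is a one-line calculation.

```python
polaris_petaflops = 44                       # ~44 quadrillion calculations per second
aurora_petaflops = 2 * 1000                  # 2 exaflops = 2,000 petaflops

print(aurora_petaflops / polaris_petaflops)  # ≈ 45.5, i.e. "almost 50 times"
```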

A more powerful supercomputer means that nuclear scientists can simulate the fundamental physics underlying the reactions in as much detail as possible, which will allow them to make better assessments of the overall safety and efficiency of new reactor designs. Reactors are the heart of a nuclear power plant. Inside them, a process called fission splits atoms in a self-sustaining chain reaction that produces incredible amounts of heat, which is used to turn water into steam that spins a turbine to generate electricity.

“Anyone out there that’s actively designing a reactor is going to use what we call ‘faster running tools’ that will look at things on a system-level scale and make approximations for the reactor core itself,” Dillon Shaver, principal nuclear engineer at Argonne National Laboratory, tells PopSci. “[At Argonne] we are doing as close to the fundamental physical calculations as possible, which requires a huge amount of resolution and a huge amount of unknowns. It translates into a huge amount of computation power.”

Shaver’s job, in a nutshell, is to do the math that prevents reactors from melting down. That involves a deep understanding of how different types of coolant liquids behave, how fluid flows around the different reactor components, and what kind of heat transfer occurs. 

[Related: Why do nuclear power plants need electricity to stay safe?]

According to the Department of Energy, “all commercial nuclear reactors in the US are light-water reactors. This means they use normal water as both a coolant and neutron moderator.” And most active light-water reactors have a fuel pin geometry design, where large arrays of fuel pins (large tubes that contain the fuel, usually uranium, needed for fission reactions) are arranged in a rectangular lattice.

The next generation of reactor designs that Shaver and his team are investigating includes wire-wrapped liquid metal fast reactors. In these, the fuel pins are arranged in a triangular lattice instead of a rectangular one, and each pin is wrapped with a thin wire that forms a helix around it. “This leads to some really complicated flow behavior because the [liquid metal, like sodium,] has to move around that wire and usually causes a spiral pattern to develop. That has some interesting implications on heat transfer,” Shaver explains. “A lot of time it enhances it, which is a very desirable thing” because it’s able to get more power out of a limited amount of fuel.

However, with the advanced designs like the wire wrap, “it’s a little bit more complicated to pump the fluid around these wires compared to just an open model,” he adds, which means that it could take more input energy too.  

An illustration of the inside of a pebble bed reactor. Argonne National Laboratory

Another popular option is called a pebble bed reactor, which involves graphite pebbles, each about the size of a tennis ball, embedded with nuclear fuel. “You just randomly pat them into an open container and let fluid flow around them,” Shaver says. “That is a very different scenario compared to what we’re used to with light-water reactors because now all of the fluid can move through these random spaces between the pebbles.” Such a system has many benefits for low-energy cooling.

With the newly proposed designs, the goal is to ultimately generate more power while putting less in. “You’re trying to enhance the heat transfer you get from it, and the price you pay is how much energy it takes to pump it,” says Shaver. “There’s an interesting cost-benefit there.” Some of the tradeoffs can be significant, and these supercomputer simulations promise to give more accurate numbers than ever, allowing upcoming nuclear power plants to work with reactors that are as efficient and safe as possible. 

Do all geese look the same to you? Not to this facial recognition software.
https://www.popsci.com/technology/facial-recognition-geese/ | Wed, 04 Oct 2023
A greylag goose. S Kleindorfer / Konrad Lorenz Research Centre for Behaviour and Cognition, University of Vienna

Here's how scientists are using this tech on animal research.

Even though we can’t tell the birds in a flock apart without examining them closely, the birds in the group know who’s who. That’s because they have certain physical marks that help distinguish them.

Just as individual humans might have distinct moles or other unique physical characteristics, Greylag geese have unique grooves on their beaks. To prove that Greylag geese do indeed have distinctive facial features, a team of scientists from Flinders University in Australia and the University of Vienna in Austria developed facial recognition software that can assign a goose face to a goose ID within a database with around 97 percent accuracy.

“Results from the facial recognition software showed that identification of individual geese using images of their bill was possible and validated the idea that geese are visually unique,” the researchers wrote in a paper they published last month in the Journal of Ornithology.
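
The paper doesn’t come with code, but the general recipe behind this kind of identification is well established: turn each bill photo into a numeric feature vector, then match a new photo against a database of vectors for known geese. Below is a minimal nearest-neighbor sketch; the toy embed() function is a stand-in for whatever feature extraction the researchers actually used.

```python
import numpy as np

def embed(image: np.ndarray) -> np.ndarray:
    """Toy feature extractor: normalize and flatten a grayscale bill crop.
    (The study's real features were far more sophisticated.)"""
    return image.astype(float).ravel() / 255.0

def identify(photo: np.ndarray, database: dict[str, np.ndarray]) -> str:
    """Return the ID of the known goose whose stored vector is most similar
    to the new photo's vector, using cosine similarity."""
    query = embed(photo)

    def cosine(a: np.ndarray, b: np.ndarray) -> float:
        return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

    return max(database, key=lambda goose_id: cosine(query, database[goose_id]))
```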

[Related: What’s life like for a fruit fly? AI offers a peek.]

But of course a computer accuracy test can only prove so much. To test whether geese can recognize each other by their faces (and not by some other feature such as smell or sound), scientists took photos of individuals within a group of Greylag geese and tested how other members of the flock reacted to the printed 2D images.

As part of their experiment, the researchers blew up these photos into life-size portrayals that they then put in front of the real geese. When presented with a photo of themselves, their partner, and a flock mate, these geese gravitated towards the photos of their partner, and actually hissed at photos of themselves. (Because geese don’t own mirrors, they don’t know what they look like, and therefore when they see themselves for the first time, they register it as an unfamiliar goose.) 

[Related: Artificial intelligence is helping scientists decode animal languages]

Facial recognition is a complicated technology in the human world. It doesn’t help that it’s getting more commonplace. While it can be more convenient than typing in a passcode on your phone, or keeping track of a key, mistakes happen, privacy problems arise, and the technology itself is still fairly unreliable.

But in the animal world, it has the potential to help. Petco, for example, is using facial recognition for pets as the backbone of its lost pets database. Owners can upload photos, and the software will scan for image matches at nearby shelters. 

For natural scientists and conservationists, this type of software can help keep track of individual animals by spotting which ones pass by trail cams or camera traps. Different animals have different tells. For tigers, the differentiator is their stripes. For other animals like bears or pumas, researchers may have to rely more on body markings. And for farm animals like sheep, cows, and pigs, scientists want to use the technology to monitor their daily behaviors and overall well-being. But in that case, questions remain on who the data is really for: the animals or the humans?

AI narrators will read classic literature to you for free
https://www.popsci.com/technology/ai-reads-audiobooks/ | Mon, 02 Oct 2023
Old books in a pile. Deposit Photos

Synthetic voices can take old texts such as "Call of the Wild" and narrate them on platforms like Spotify. Here's how it works—and how to listen.

Recording an audiobook is no easy task, even for experienced voice actors. But demand for audiobooks is on the rise, and major streaming platforms like Spotify are making dedicated spaces for them to grow into. To fuse innovation with frenzy, MIT and Microsoft researchers are using AI to create audiobooks from online texts. In an ambitious new project, they are collaborating with Project Gutenberg, the world’s oldest and probably largest online repository of open-license ebooks, to make 5,000 AI-narrated audiobooks. This collection includes classic titles in literature like Pride and Prejudice, Madame Bovary, Call of the Wild, and Alice’s Adventures in Wonderland. The trio published an arXiv preprint on their efforts in September. 

“What we wanted to do was create a massive amount of free audiobooks and give them back to the community,” Mark Hamilton, a PhD student at the MIT Computer Science & Artificial Intelligence Laboratory and a lead researcher on the project, tells PopSci. “Lately, there’s been a lot of advances in neural text to speech, which are these algorithms that can read text, and they sound quite human-like.”

The magic ingredient that makes this possible is a neural text-to-speech algorithm, which is trained on millions of examples of human speech and then tasked with mimicking it. It can generate different voices with different accents in different languages, and can create custom voices with only five seconds of audio. “They can read any text you give them and they can read them incredibly fast,” Hamilton says. “You can give it eight hours of text and it will be done in a few minutes.”

Importantly, this algorithm can pick up on subtleties like tone and the modifications humans add when reading words: how a phone number or a website is read, what gets grouped together, and where the pauses are. The algorithm is based on previous work from some of the paper’s co-authors at Microsoft.

Like large language models, this algorithm relies heavily on machine learning and neural networks. “It’s the same core guts, but different inputs and outputs,” Hamilton explains. Large language models take in text and fill in gaps, and they use that basic functionality to build chat applications. Neural text-to-speech algorithms, on the other hand, take in text and pump it through the same kinds of algorithms, but instead of spitting out text, they spit out sound, Hamilton says.

[Related: Internet Archive just lost a federal lawsuit against big book publishers]

“They’re trying to generate sounds that are faithful to the text that you put in. That also gives them a little bit of leeway,” he adds. “They can spit out the kind of sound they feel is necessary to solve the task well. They can change, group, or alter the pronunciation to make it sound more humanlike.” 

A tool called a loss function can then be used to evaluate whether the model did a good or a bad job. Implementing AI in this way can speed up the efforts of projects like LibriVox, which currently uses human volunteers to make audiobooks of public domain works.

The work is far from done. The next step is to improve quality. Since Project Gutenberg ebooks are created by human volunteers, every person who makes an ebook does it slightly differently. They may include stray text in unexpected places, and the placement of page numbers, tables of contents, and illustrations can change from book to book.

“All these different things just result in strange artifacts for an audiobook and stuff that you wouldn’t want to listen to at all,” Hamilton says. “The north star is to develop more and more flexible solutions that can use good human intuition to figure out what to read and what not to read in these books.” Once they get that down, their hope is to use it, along with the most recent advances in AI language technology, to scale the audiobook collection to all 60,000 titles on Project Gutenberg, and maybe even translate them.
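
To make the shape of that pipeline concrete, here is a minimal sketch in Python. The helper names are hypothetical and none of this code comes from the project: a cleanup pass strips Project Gutenberg’s license markers and stray page numbers, and a placeholder stands in for the neural text-to-speech model that would turn each chunk of text into audio.

```python
from pathlib import Path

def clean_gutenberg_text(raw: str) -> str:
    """Keep only the body between Project Gutenberg's START/END markers
    and drop lines that are just page numbers. (Simplified heuristics;
    the real project handles many more volunteer-specific quirks.)"""
    body = raw.split("*** START OF", 1)[-1].split("*** END OF", 1)[0]
    lines = [ln for ln in body.splitlines() if not ln.strip().isdigit()]
    return "\n".join(lines)

def synthesize(text: str, voice: str = "narrator") -> bytes:
    """Stand-in for a neural text-to-speech call. A real model would return
    spoken audio for `text`; here we return a short silent placeholder."""
    return b"\x00\x00" * 8000  # ~1 second of silence at 16 kHz, 16-bit mono (raw PCM)

def make_audiobook(ebook_path: str, out_path: str) -> None:
    text = clean_gutenberg_text(Path(ebook_path).read_text(encoding="utf-8"))
    # Narrate paragraph by paragraph so one garbled span doesn't ruin the whole book.
    chunks = [c for c in text.split("\n\n") if c.strip()]
    audio = b"".join(synthesize(chunk) for chunk in chunks)
    Path(out_path).write_bytes(audio)  # raw PCM in this sketch, not a real audio container
```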

For now, all the AI-voiced audiobooks can be streamed for free on platforms such as Spotify, Google Podcasts, Apple Podcasts, and the Internet Archive.

There are a variety of applications for this type of algorithm. It can read plays and assign distinct voices to each character. It can mock up a whole audiobook in your voice, which could make for a nifty gift. However, even though there are many fairly innocuous ways to use this tech, experts have previously voiced their concerns about the drawbacks of artificially generated audio and its potential for abuse.

Listen to Call of the Wild, below.

In pictures: 7 things from MoMA’s new design and architecture exhibits that made us go ‘wow’
https://www.popsci.com/technology/moma-emerging-ecologies-life-cycles/ | Thu, 21 Sep 2023
Drawing of the Tsuruhama Rain Forest Pavilion project in Osaka, Japan. Cambridge Seven Associates

These projects showcase the wacky ways humans have experimented with sustainable materials and structures.

Humans have altered the natural environment in incredible and terrifying ways. They’ve been able to refine and harness elements of nature, creating new types of living spaces and usable objects in the process. Some of these innovations come at a cost. 

The way cities are built and the makeup of the materials that permeate everyday life have become detrimental to the health of animals and ecosystems alike. While whole-scale change is slow, two new exhibits at the Museum of Modern Art in New York City, Emerging Ecologies: Architecture and the Rise of Environmentalism and Life Cycles: The Materials of Contemporary Design, highlight how architects, engineers, and designers are reimagining ways to transform natural resources and materials to address growing concerns about human impact on ecology and the environment. Here are some of our favorite projects.

[Related: The ability for cities to survive depends on smart, sustainable architecture]

Solar Sinter

In some forms of 3D printing, powders are fused into a solid material. German designer Markus Kayser came up with a machine called the Solar Sinter in 2011 that harnesses power from the sun to turn desert sand into glass. Kayser has used this technique to make objects like bowls. Credit: Markus Kayser

Cow dung lamps

Cow dung has previously been dubbed the “planet’s prodigious poo problem” by The Guardian, because far more waste is being generated than there are ways to deal with it, and it toxifies nearby ecosystems. Indonesian designer Adhi Nugara dares to imagine a second life for this waste. Add some glue and electronics, and cow dung can be fashioned into lamps, speakers, chairs, and more. Credit: Studio Periphery

Algae-based biopolymers

Algae are marine organisms that act like the plants of the sea, and seaweed is a form of macroalgae. Seaweed takes up carbon dioxide as it grows, and it can be converted into plastic substitutes, binders, fibers, and pigments. European designers have set up labs to test new reduced-carbon products and materials made from local seaweed. Credit: Atelier Luma / Luma Arles, Eric Klarenbeek, Maartje Dros, Studio Klarenbeek & Dros.

Liquid-printed lights and bags

MIT scientists have come up with a way to 3D print objects in gel, eliminating the challenges that gravity presents. This way, materials like rubber, foam, plastic, and more can quickly settle into their intended forms. Some silicone structures can even be blown up like a balloon to attain their final shape. Credit: Christophe Guberan/MIT Self-Assembly Lab.

Thermoheliodon

The Thermoheliodon, made at Princeton University in 1956, was a small, domed insular test bed for architectural models. The idea was that it would allow architects to understand how different designs would interact with the temperatures and climates of the surrounding environment as it heated up and cooled down. While it had its flaws, it inspired early principles around bioclimatic design (think good air flow and low energy use). Credit: Guy Gillette

The National Fisheries Center and Aquarium project that never was

There was a grand plan in 1966 to have a theatrical national fisheries center and aquarium in DC along the Potomac. Blueprints were drawn up, models were made. It would’ve had marine exhibits, laboratories, and even a greenhouse to mimic the ecologies of the Everglades and coastal tide pools. The project was approved for construction, but was ultimately abandoned when President Nixon put a freeze on federal spending. Credit: Charlotte Hu

Emilio Ambasz’s green architecture

The Prefectural International Hall and Lucile Halsell Conservatory look like a human version of Hobbiton. These buildings are covered in greenery and take the physics of the natural world into account to minimize energy use. Singapore is applying similar methods across its city to reduce the worsening heat island effects of climate change. Credit: Hiromi Watanabe

Life Cycles: The Materials of Contemporary Design is on view until July 7, 2024.

Emerging Ecologies: Architecture and the Rise of Environmentalism is on view until January 20, 2024.

Why humans feel bad for awkward robots
https://www.popsci.com/technology/human-robot-embarrassment/ | Wed, 20 Sep 2023
A grimacing awkward smiley. Bernard Hermant / Unsplash

Secondhand embarrassment is related to empathy.

When someone does something cringey, it’s only human nature to feel embarrassed for them. If a friend slips and falls on a wet floor, it makes sense to feel self-conscious on their behalf. It’s a sign of empathy, according to science, and it determines how people cooperate, connect, and treat one another. What happens, though, when the second person in this situation is replaced with a robot?

Experiencing secondhand embarrassment lights up areas in the human brain associated with pain and the recognition of emotions. In that vein, social anxiety is linked to heightened empathy, but it also comes with a reduced capacity to actually understand the other person’s emotions, known as cognitive empathy. And of course, the closer and more socially invested a person is in another, the more acutely they’ll feel this bystander discomfort.

Interestingly, new research from Toyohashi University of Technology in Japan found that humans can have the same sort of secondhand embarrassment when they see a robot commit a social faux pas. A detailed report was published in the journal Scientific Reports last week. 

To test this phenomenon, human subjects were immersed in a virtual environment where both human and robot avatars were present. The researchers then put these avatars, both the ones representing humans and the ones depicting bots, through awkward situations like stumbling in a crowd, running into a sliding door, or dancing clumsily in public. 

The researchers then measured the subjects’ skin conductance, or the electrical activity of their sweat glands, which correlates with arousal signals like stress and other states of high emotion. Participants also filled out a questionnaire about their emotional responses to each virtual social situation.

[Related: Do we trust robots enough to put them in charge?]

The data indicates that humans felt embarrassment on behalf of both the human and robot avatars when they were in a socially awkward scenario, although they perceived the situation as more “real” for the human avatar than for the robot.

Still, the team says that the results show that “humans can empathize with robots in embarrassing situations, suggesting that humans assume the robots can be aware of being witnessed and have some degree of self-consciousness based on self-reflection and self-evaluation,” they wrote in the paper. But it also matters what the robot looks like: “The appearance of the robot may affect the empathic embarrassment because humans empathize more strongly with more human-looking robots and less with more mechanical-looking robots when they are mistreated by humans.”

Previous research into this area has turned up similar themes. Last year, a study out of France found that humans would unconsciously sync their movements with those of humanoid robots, in a bid to fit in socially. And imbuing robot speech with more emotional undertones makes robots more acceptable to humans.

Despite the interesting findings in this recent study, the team from Toyohashi University of Technology acknowledges that a larger sample size, as well as real-world humans and robots, would make the conclusions more convincing. 

“Our study provides valuable insights into the evolving nature of human-robot relationships. As technology continues to integrate into our daily lives, understanding the emotional responses we have towards robots is crucial,” Harin Hapuarachchi, the lead researcher on the project, said in a press release. “This research opens up new avenues for exploring the boundaries of human empathy and the potential challenges and benefits of human-robot interactions.”

Honda’s new Motocompacto e-scooter looks like a rideable suitcase
https://www.popsci.com/technology/honda-motocompacto/ | Sat, 16 Sep 2023
A clear view of the inside of the Motocompacto. Honda

It weighs less than 50 pounds and has a zero-emissions range of 12 miles.

Honda’s newest electric scooter, called the Motocompacto, looks like a roving suitcase. Announced this week, the Motocompacto is designed to be foldable and lightweight, meaning that the handlebars and seat can be tucked into the suitcase-shaped main body of the vehicle. The idea is that this should make it easy to transport and easy to store. According to Honda, it should be able to reach “a maximum speed of 15 mph and [has a] zero-emissions range of up to 12 miles.”

Additionally, Motocompacto “can be fully charged in just 3.5 hours in both the folded and ready-to-ride configuration using a common 110 v outlet.” At a price tag of $995, it will be available for purchase later this year. 

The Motocompacto is an updated, electric take on an early ’80s Honda design called the Motocompo. The Motocompo was also a collapsible scooter, but it looked more like a handyman’s duffle bag in its condensed form. The intent was that it could fit into the cargo space of the Honda City kei car, presumably serving as a final-mile solution that carried people from wherever they parked to their destination.

[Related: Electric cars are better for the environment, no matter the power source]

While Motocompacto should be able to carry out the same jobs as the original model, it comes with more modern conveniences. “Motocompacto is perfect for getting around cityscapes and college campuses. It was designed with rider comfort and convenience in mind with a cushy seat, secure grip foot pegs, on-board storage, a digital speedometer, a charge gauge and a comfortable carry handle,” Honda said in the press release. “A clever phone app enables riders to adjust their personal settings, including lighting and ride modes, via Bluetooth.”

Honda’s Motocompacto in motion. Honda

Its wheels and frame are made of heat-treated aluminum, and it has a bright LED headlight and taillight, side reflectors, and “a welded steel lock loop on the kickstand that is compatible with most bike locks.” It weighs around 41 pounds, comparable to a typical carry-on suitcase.

[Related: BMW’s electric scooter will hit 75 mph and has motorcycle vibes]

The vehicle is part of Honda’s larger goal to release more electric models of its fleet by 2030, and to sell only electric or fuel cell models by 2040. Honda joins other major carmakers around the world, like GM, Ford, Hyundai, and Volvo, which are all committing to lowering global carbon emissions by offering more new EVs as options for consumers. The Motocompacto is set to be sold in conjunction with the company’s newest lineup of all-electric SUVs, Jane Nakagawa, vice president of the R&D Business Unit at American Honda Motor Co., said in a statement. “Motocompacto supports our goal of carbon neutrality by helping customers with end-to-end zero-emissions transport.”

Watch an intro video to the Motocompacto below:

New tool for finding lost pets is a low-tech twist on old-school tags
https://www.popsci.com/technology/amazon-ring-pet-tag/ | Thu, 14 Sep 2023
Ring brings in modern pet tags. Marcus Wallis / Unsplash

Ring is adding QR codes to traditional pet tags.

For people whose pets are sneaky little escape artists, there are a number of ways to use modern technology to find out where their four-legged friend went. Some owners have even started attaching an AirTag to their furry companions to track their location (although Apple, AirTag’s maker, as well as vets, discourage owners from using these devices on their pets). 

A new accessory from Amazon’s Ring, called Pet Tag, could serve as another tool in this arsenal. It sits somewhere between a traditional dog tag and a microchip, and it will be available for purchase in early October.

There’s only so much information owners can fit on an engraved name tag. Most tags contain the pet’s name, an address, and a phone number. Microchips, on the other hand, contain unique ID numbers for the animals, but the chips need to be read with special scanners.

[Related: QR codes are everywhere now. Here’s how to use them.]

Ring’s Pet Tag offers a QR code. It works just like scanning the QR code for a menu at a restaurant. If someone scans the QR code on the collar of a lost pet they’ve found, the owner will get a notification that their pet’s tag has been scanned. The person who scanned the tag will be able to view basic information such as the pet’s name, a short description of the animal, and relevant health needs, and they’ll have the option to press a button in the web portal to start a two-way conversation with the pet’s owner. These are all part of Ring’s pet profile, which it launched last year. The tag is just a portable way to tether the profile to the actual animal.
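
Ring hasn’t published how its tags encode that link, but the mechanics of a QR tag are simple: the code is just a URL pointing at a hosted profile page. A minimal sketch with Python’s qrcode library, using a made-up profile URL, looks like this:

```python
import qrcode  # pip install "qrcode[pil]"

# Hypothetical profile URL; a real tag would point at the vendor's hosted pet profile.
profile_url = "https://example.com/pets/abc123"

# Encode the URL as an image that could be printed onto a physical tag.
img = qrcode.make(profile_url)
img.save("pet_tag_qr.png")
```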

According to Ring, this tag is more privacy-protecting than traditional tags because owners don’t have to list any personally identifying information. It’s pretty low-tech, considering that it doesn’t come with GPS, microphones, or any type of data transmitter (this is unlikely to turn into the CIA’s cat spies project 2.0). And at under $10, the Pet Tag is an appealing alternative to other tracking gadgets.

Ring has previously launched other features to help owners find their pets. There is Lost Pet Post, a type of post users can make in the app to alert local Neighbors users. And this summer, Ring integrated with the Petco Love Lost database, which uses facial recognition technology to match lost pets with images from found posts.

It’s Fat Bear season again! This is the best feed to keep up with these hairy giants
https://www.popsci.com/technology/how-to-watch-fat-bear-cam/ | Tue, 12 Sep 2023
747 is the winner of Fat Bear Week 2022. L. Law/Katmai National Park and Preserve

Watch nature's reality TV.

For bears, building a summer body has a unique meaning. Instead of getting beach-ready, these large mammals are loading up on fuel for winter hibernation, which means their goal is to pack on as much body weight as possible from summer to fall. This is the time for bears to try to eat a year’s worth of food in about six months. The thickest individuals can gain up to 100 pounds during this body transformation period.

In 2014, a tradition called Fat Bear Week was started by the National Park Service in partnership with multimedia organization explore.org to honor this phenomenon. Now, every year, human participants can vote for their favorite brown bear in Alaska’s Katmai National Park and Preserve in a knockout-style tournament that starts in late September. The fattest bear champ is announced in early October. 

Explore.org, which provides the camera streams into the lives of bears, hosts a series of live cams connected around the world. The seven bear cams, stationed throughout the Alaskan park and even in the streams, have been in operation since 2012. Besides bears, there are other livestreams that peek at orcas in British Columbia, a bat cave in Texas, the kelp forests off the west coast of the US, and birds along the Mississippi (which is called Nestflix), just to name a few. There’s a comment section for users to discuss what they’re seeing, too. 

[Related: Spy tech and rigged eggs help scientists study the secret lives of animals]

There are also cameras set up in more mundane places, like a kitten rescue sanctuary in Los Angeles, California. And these cams have helped natural science researchers understand how animals are behaving in remote locations, providing useful data for conservation efforts. 

Additionally, studies in 2021 and 2022 found that the bear cams in particular help visitors, both virtual and in person, develop an emotional connection to the well-being of these animals, making them willing to pay to help with their preservation. The benefit goes both ways, as having this kind of parasocial, intimate relationship with the natural world also boosts visitors’ mental health.

[Related: Google is inviting citizen scientists to its underwater listening room]

In some cases, these cameras have come in handy unexpectedly. Last week, stream viewers helped park staff rescue a lost hiker who signaled for help in front of one of the live cameras. “That was a first for the bear cams for sure,” Mike Fitz, a resident naturalist with Explore.org and creator of Fat Bear Week, told The Washington Post.

Watch Katmai’s bears on the prowl for grub by opening up one of seven live webcams. (If you get lucky, you might see one of this year’s contestants for Fat Bear Week in action.) If the action on your chosen cam slows down, you can click any of the thumbnails under the main feed to see what’s going on elsewhere in the park. Check out explore.org’s YouTube page for highlight reels.

This wormy robot can wriggle its way around a jet engine
https://www.popsci.com/technology/ge-aerospace-sensiworm-robot/ | Sat, 09 Sep 2023
Sensiworm can crawl around a jet engine. GE Aerospace

It's soft enough to squeeze into tight spaces.

A new wormy robot could help with jet engine inspections at GE Aerospace, according to an announcement this week. Sensiworm, short for “Soft ElectroNics Skin-Innervated Robotic Worm,” is the newest outgrowth in GE’s line of worm robots, which includes a “giant earthworm” for tunneling and the “Pipeworm” for pipeline inspection. 

Jet engines are complex devices made up of many moving parts. They have to withstand factors like high heat, constant motion, and varying degrees of pressure. Because they need to perform at top speed, they often need to undergo routine cleaning and inspection. Typically, this is done with human eyes and a device like a borescope, a skinny tube with a camera that’s snaked into the engine (technically known as a turbofan). With Sensiworm, GE promises that this process could become less tedious and could happen “on wing,” meaning the turbofan doesn’t need to be removed from the wing for the inspection.

Like an inchworm, Sensiworm moves forward on its own, using two sticky, suction-like pads on its underside to squish into crevices and scrunch around the curves of the engine. It can find areas with cracks or corrosion, or check whether the heat-protecting thermal barrier coatings are as thick as they should be.

It comes with cameras and sensors onboard, and is tethered by a long, thin wire. In a demo video, the robot showed that it can navigate around obstacles, hang on to a spinning turbine, and sniff out gas leaks.

These “mini-robot companions” could add an extra pair of eyes and ears, expanding what human service operators can inspect on wing without having to take anything apart. “With their soft, compliant design, they could inspect every inch of jet engine transmitting live video and real-time data about the condition of parts that operators typically check,” GE Aerospace said in a press release.

“Currently, our demonstrations have primarily been focused on the inspection of engines,” Deepak Trivedi, principal robotics engineer at GE Aerospace Research, noted in the statement. “But we’re developing new capabilities that would allow these robots to execute repair once they find a defect as well.”

Flexible, squiggling robots have found lots of uses in many industries. Engineers have designed them for medical applications, search and rescue, military operations, and even space ventures.

Watch Sensiworm at work below: 

The post This wormy robot can wriggle its way around a jet engine appeared first on Popular Science.

What it’s like to step inside a room with no echoes https://www.popsci.com/technology/inside-an-anechoic-chamber/ Thu, 07 Sep 2023 22:00:00 +0000 https://www.popsci.com/?p=568589
wedges in the anechoic chamber and a hand holding a sound level meter
These oddly angled wedges help absorb sound. Charlotte Hu

Anechoic chambers help researchers study the science of sound. Here’s how.


One of the quietest places in New York City is a 520-cubic-foot room at the edge of the East Village. Tucked behind a heavy, suctioned door, this tiny space is decorated on all sides with wacky wedge-shaped fiberglass protrusions. Instead of a hardwood or carpeted floor, there is a thin metal mesh that visitors can walk on—under which are more wedges of fiberglass. It’s a full anechoic chamber, and it’s the only one in the city. There are no secrets to be found inside, only silence. The word anechoic itself means “no echo.” It’s the perfect place to study the science of sound. 

Standing inside the room feels like living within a massive pair of noise-canceling headphones. Speech sounds softer, rounder, quieter. Some people’s ears even pop when they enter the chamber. And while it seems silent to me, Melody Baglione, a professor of mechanical engineering at The Cooper Union’s Albert Nerken School of Engineering, proves that we’re not experiencing absolute silence. As we stay still and hushed, the sound level meter she’s holding drops to around 18 decibels (dBA). When we made the same type of measurement outside in the Vibration and Acoustics Laboratory, the ambient sound level was around 40 dBA. The meter measures sound pressure level in decibels, weighted toward the frequencies the human ear is most sensitive to.
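Decibels are a logarithmic measure: sound pressure level is 20 * log10(p / p0), with a reference pressure p0 of 20 micropascals. The short sketch below is mine, not the lab’s; it simply plugs in the two readings quoted above to show that a 22 dB gap corresponds to roughly a 13-fold difference in sound pressure.

```python
import math

P_REF = 20e-6  # reference sound pressure in pascals (20 micropascals)

def pressure_from_db(level_db: float) -> float:
    """Convert a sound pressure level in dB back to pascals."""
    return P_REF * 10 ** (level_db / 20)

chamber = pressure_from_db(18)   # ~18 dBA inside the anechoic chamber
lab = pressure_from_db(40)       # ~40 dBA in the lab outside

print(f"chamber: {chamber:.2e} Pa, lab: {lab:.2e} Pa")
print(f"the lab carries about {lab / chamber:.1f}x the sound pressure of the chamber")
# 10 ** (22 / 20) is roughly 12.6, so the 'quiet' lab has 12-13 times the pressure
```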

Sound, as we experience it, is a pressure wave propagated by a vibrating object. The wave moves particles in surrounding mediums like air, water, or solid matter. When sound waves enter human ears, they pass as mechanical vibrations through a drum-like membrane and on to hair cells that send electrical signals to the brain. Hearing loss happens when those hair cells get damaged, and the cells tuned to higher frequencies tend to go first. 

Engineering photo
The chamber is hidden behind a heavy door. Charlotte Hu

“The higher frequencies have wavelengths that are shorter, so higher frequency sound waves interact with the wedges in the anechoic chamber more so than lower frequencies,” says Baglione. “The lower frequencies have larger wavelengths, and they are harder to absorb. You need a bigger room.” There’s human error too—sometimes students drop things through the floor, and those stray objects give the sound a way to reflect. 

[Related: A look inside the lab building mushroom computers]

Real estate in New York is at a premium, so this chamber falls on the smaller side compared to ones at an Air Force base or carmakers’ testing facilities. A wide variety of projects have undergone tests in The Cooper Union’s anechoic chamber. Students have characterized drone noises in order to figure out how to cancel those sounds out. They’ve compared the sound quality of a traditional violin versus a 3D-printed one. They’ve tested an internal combustion engine to inform muffler designs, studied sound localization for robots, and even evaluated virtual reality headsets. Baglione says that new proposals for using the chamber are always coming in. 

Engineering photo
Ears affect how we perceive sound. Charlotte Hu

How do anechoic chambers work?

Noise, reverberations, and echoes are all around, all the time. To engineer a space that can eliminate everything except for the original sound requires a crafty use of materials and deep knowledge about physics and geometry. 

“Sound quality is often very subjective. Sound is as much a matter of perception and our experience and our expectation,” says Paul Wilford, a research director at Nokia Bell Labs. “In our work, largely through the anechoic chamber, we’ve learned that the sound we hear directly from a source may be actually at a lower level than the sound that’s coming from bouncing off of walls or being reverberated through a room.” The chamber he’s referring to is the one in Murray Hill, New Jersey, which is the first of its kind. Originally constructed in 1947, the room is currently undergoing renovations. 

Sound is a big part of what Nokia, a communications company, does, and it needed a way to quantify sound quality so it could design better microphones, speakers, and other devices. “What the anechoic chamber was conceived to be is a powerful environment, a measuring device, an acoustic tool where you can make high-quality, reliable, repeatable, acoustic measurements,” Wilford explains.

[Related: A Silent Isolation Room For Satellites]

Sound can either be reflected, absorbed, or transmitted through a medium. By studying the physics of how sound propagates through the air, the researchers at Bell Labs came up with wedges made of foam-based fiberglass encapsulated in a wire mesh. The impedance of that material is matched to the incoming sound waves so the wedges absorb the waves rather than reflect them. The sound waves hit these five-foot-deep cones and get trapped. Wedges have since become the standard design choice for anechoic chambers. 

“What that means is that if you’re standing in that room, and there’s a source that’s emitting sound, all you hear is that direct source,” Wilford says. “In some sense, it’s the pure sound that you hear.” If there were two people in the room, and one of them turned around and spoke to the wall, then the other person would not be able to hear them. “There are other anechoic chambers around the world now, and because of these properties, these results are repeatable from room to room to room,” he adds. 

What anechoic chambers are used for today

There are lots of ways to analyze sound in this type of echoless room. Scientists can characterize reverberation by timing how long it takes for a certain sound to decay. At Bell Labs, there are high quality directional microphones that are strategically placed in the room along with localization equipment that pinpoints where these microphones are in 3D space. They can use audio spectrum analyzers that look at the frequency response of these microphones, or move a speaker in an arc around the microphone to see how the sound changes as a function of where it is. They can also synchronize sounds and measure interference patterns. 
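One standard way to quantify that decay is the reverberation time, usually reported as RT60: how long the level takes to fall by 60 decibels after a source stops. The sketch below uses synthetic numbers of my own rather than anything measured at Bell Labs or Cooper Union; it only shows the basic fit-and-extrapolate arithmetic.

```python
import numpy as np

# Toy reverberation-time estimate: fit a line to a decaying level-vs-time curve
# and extrapolate how long the sound takes to fall 60 dB (the usual "RT60").
# The decay data here is synthetic; a real measurement would come from a microphone.
times = np.linspace(0.0, 0.5, 50)            # seconds after the sound source stops
levels_db = 94.0 - 120.0 * times             # synthetic linear decay: 120 dB per second
levels_db += np.random.normal(0, 0.5, 50)    # a little measurement noise

slope, intercept = np.polyfit(times, levels_db, 1)  # decay rate (dB/s) and starting level
rt60 = -60.0 / slope                                # time to drop 60 dB

print(f"decay rate: {slope:.1f} dB/s, estimated RT60: {rt60:.2f} s")
# In an anechoic chamber the decay is so fast that RT60 is close to zero.
```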

[Related: This AI can harness sound to reveal the structure of unseen spaces]

Engineering photo
Full anechoic chambers have wedges on the floor too. Charlotte Hu

Since its creation, the anechoic chamber has paved the way for many innovations. The electret microphone, which replaced older, clunkier condenser microphones, was invented at Bell Labs, and its frequency response and performance were tested in the anechoic chamber. An AT&T product that improved voice quality in long-distance phone calls was made possible by understanding the sound reflections occurring in the network and developing the math needed to cancel out noise signals. 

Recently, Wilford notes, the chamber in New Jersey has been used a lot for work on digital twins, a tech strategy that aims to map the physical world in a virtual environment. Part of that faithful recreation needs to account for acoustics. The anechoic chamber can help researchers understand spatial audio, or how real sound exists in a given space and how it changes as you move through it. After the updates, the chamber will have better localization properties, which will allow researchers to understand how to use sound to locate objects in IoT applications.

Before I visited the anechoic chamber at The Cooper Union, Wilford shared that being in the room “retunes your senses.” He’s become increasingly aware of the properties of sound in the real world. He can close his eyes in a conference room and tell where a speaker is, and whether they’re moving closer or farther away, just from how their voice changes. Background noises that his mind once blocked out have become apparent. 

After I stepped outside the lab in lower Manhattan, I noticed how voices bounced around in the metal elevator, and how the hum of the air conditioner changed pitch slightly as I turned my head. The buzz and chatter of background noise on the streets became brighter, louder, and clearer. 

The post What it’s like to step inside a room with no echoes appeared first on Popular Science.

Disruptive water main breaks happen more often than you think https://www.popsci.com/technology/why-water-main-breaks-happen/ Fri, 01 Sep 2023 14:00:00 +0000 https://www.popsci.com/?p=567431
water main break caused water to run on to the times square subway lines
A water main break in Times Square shut down subway service. MTA / Flickr

Blame the old pipes.


This week, a major water main break in Times Square, New York, created waves of disruptions at the city’s busiest subway station. Sheets of water dramatically cascaded down onto the subway tracks below—an urban underground waterfall. The culprit was a 127-year-old cast iron pipe that was a few years past its expected lifespan, according to AP.

Although the event precipitated a mad scramble to dig, scoop, find, and fix the mess (not to mention clean up and pump out the aftermath), water main breaks are actually not that uncommon. A 2021 report from the American Society of Civil Engineers estimates that there is a water main break every two minutes in the US. The cost of replacing all the pipes in the country before they reach the end of their life will be over $1 trillion.

There were around 400 water main breaks in New York City last year, and the metropolis has been spending more than $1 billion to upgrade its approximately 6,800 miles of aging infrastructure, including water and sewer lines. New York isn’t an anomaly either. Los Angeles has also rolled out a $1.3-billion plan to gradually replace the deteriorating pipes that run beneath the major city. 

So what causes a water main to rupture? A variety of factors. Changes in temperature, water pressure, soil conditions, climate change, a stray tree root, as well as ground movements due to construction, earthquakes, and wear and tear can all play a role. The material that makes up the water mains doesn’t matter as much as the age. Existing water mains can be made of iron, cement, and even wood. 

The number one reason that water main breaks happen so often is the age of the US’s water infrastructure. “Imagine putting anything 120 years old and assuming that it’s going to function the way it did 120 years ago with the demands we put on it now,” Darren Olson, vice chair of the 2021 Report Card for America’s Infrastructure and a water infrastructure expert from Chicago, tells PopSci. “When these water mains were put in 120-plus years ago, nobody envisioned what New York would be and the types of stresses on these systems, especially that we’re facing recently with climate change.”  

[Related: How ‘underground climate change’ affects life on the Earth’s surface]

Extreme weather like droughts or floods can create wild fluctuations in the water levels at treatment plants. The swings in temperature, such as the increasingly crazy freeze-thaw cycles in the colder seasons, can cause pipes to expand and contract more dramatically, making them more susceptible to damage. 

“They estimate that 6 billion gallons of treated water is lost each day in the US. That’s like 9,000 swimming pools of water that we just lose due to leaks or water main breaks,” says Olson. And these breaks can cause a ripple of secondary problems. “A water main break can not only flood something, but imagine the businesses that are relying on that water. It could be a manufacturing plant, it could be a restaurant,” he adds. “An estimated $51 billion of economic loss for water-relevant industries occurs each year because of water main breaks.” 
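Those figures are easy to sanity-check. Assuming Olson means Olympic-size pools, which hold roughly 660,000 gallons each (my assumption, not his), the quick arithmetic below reproduces the comparison.

```python
# Rough sanity check of the comparison quoted above.
# Assumption (mine): "swimming pool" means an Olympic-size pool, ~660,000 gallons.
GALLONS_LOST_PER_DAY = 6_000_000_000
GALLONS_PER_OLYMPIC_POOL = 660_000

pools = GALLONS_LOST_PER_DAY / GALLONS_PER_OLYMPIC_POOL
print(f"{pools:,.0f} pools per day")  # about 9,100, close to the 9,000 figure Olson cites
```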

One pipe going down can affect the whole water distribution system. “If you lose pressure in that water system, that pressure is still critical in a water distribution system because it’s forcing the water not only to go to our faucets, but it’s not allowing things to get into the water system,” Olson explains. “Once you have a water main break, and that pressure no longer exists, you can have boil orders on water because there’s not enough pressure on the system to guarantee that you’re keeping other things out of your water supply.” 

To fix a broken pipe, crews have to locate the leak, excavate the segment, isolate it, reroute the water, do the necessary repairs, put it back in service, then put everything else back in order. Water systems are usually looped, meaning there are a number of pathways for water to flow through. Water valve vaults that are stationed throughout the systems allow engineers to shut off a certain section of the system and still keep the unaffected parts in operation.

Small water main repairs could take a couple of hours. For the large water transmission mains, it could take much longer than that. 

[Related: Our infrastructure can’t handle climate disasters. We need to build differently.]

This whole process sounds like a daunting task, and there are so many at-risk pipes. But city engineers are learning to get smart about prevention. One method is using asset management, which is a way of tracking where all of the underground pipes are located, and considering historical issues with them, along with the size, diameter, and material of the pipes. “When you start to look at all of those in a more comprehensive way, you’re able to plan and use the dollars that we have more effectively to replace the oldest, most critical first. And that can help a city better manage their system,” says Olson. 

However, even with good planning, there are certain limitations. One of those is the funding that goes into this effort. “Back in the 1970s, the federal government was contributing 63 percent to all of the water infrastructure that we had,” Olson notes. “Now that’s down to less than 10 percent. It comes down to either the states or the municipalities, or the counties to help to invest in their own systems and fund that investment.” 

The good news is that the 2022 Bipartisan Infrastructure Bill has recognized the need to improve the country’s water infrastructure. “That bill did target money to that [issue] and it’s a good down payment for what we need,” says Olson. “But the need is so vast that that’s just hopefully the start of future federal investment in our water infrastructure.”

The post Disruptive water main breaks happen more often than you think appeared first on Popular Science.

Google’s new pollen mapping tool aims to reduce allergy season suffering https://www.popsci.com/technology/google-maps-pollen-api/ Wed, 30 Aug 2023 22:00:00 +0000 https://www.popsci.com/?p=567147
a snapshot of the pollen api tool in google maps
Google

It's a hyper-local forecast, but for pollen.


Seasonal allergies can be a pain. And with climate change, we’ll have to prepare for them to get even worse. Already, the clouds of pollen this year have felt particularly potent. Google, in an attempt to help people account for this airborne inconvenience when embarking on outings and making travel plans, has added a tool called Pollen API to its Maps platform.

In an announcement this week, the company said that the feature would provide “localized pollen count data, heatmap visualizations, detailed plant allergen information, and actionable tips for allergy-sufferers to limit exposure.” Google also announced other environmental APIs including one related to air quality and another related to sunlight levels. (An API, or application programming interface, is a software component that allows two different applications to communicate and share data.)

These new tools may be a result of Google’s acquisition of environmental intelligence company Breezometer in 2022. Breezometer uses information from various sources such as the Copernicus Atmosphere Monitoring Service, governmental monitoring stations, real-time traffic information, and meteorological conditions in its algorithms and products. And while notable, Google is not the only organization to offer pollen forecasts. Accuweather and The Weather Channel both have their own versions. 

Google’s Pollen API integrates information from a global pollen index that compares pollen levels from different areas, as well as data about common species of trees, grass, and weeds around the globe. According to a blog item, they then used “machine learning to determine where specific pollen-producing plants are located. Together with local wind patterns, we can calculate the seasonality and daily amount of pollen grains and predict how the pollen will spread.” 

Hadas Asscher, product manager of the Google Maps Platform, wrote in another blog post to further explain that the model “calculates the seasonality and daily amount of pollen grains on a 1×1 km2 grid in over 65 countries worldwide, supporting an up to 5-day forecast, 3 plant types, and 15 different plant species.” Plus, it considers factors like land cover, historic climate data, annual pollen production per plant, and more in its pollen predictions. 

Along with a local pollen forecast for up to five days in the future, the tool can also give tips and insights on how to minimize exposure, like staying indoors on Tuesday because birch pollen levels are going to be skyrocketing, or noting which outdoor areas are relatively clear of allergy triggers. App developers can use this API in a variety of ways, such as managing in-cabin air quality in a vehicle by integrating it into an app available on a car’s display and advising drivers to close their windows if there’s a patch of high pollen ahead on their route. 
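For developers, tapping the Pollen API looks like an ordinary authenticated HTTP request to the Google Maps Platform. The sketch below is a rough illustration only: the endpoint path, parameter names, and response fields are my assumptions based on Google’s announcement rather than text from the official documentation, so verify them against the Maps Platform docs before relying on them.

```python
import requests

# Hypothetical sketch of a Pollen API call. The URL, query parameters, and
# response structure here are assumptions based on Google's announcement,
# not copied from the official Maps Platform documentation.
API_KEY = "YOUR_MAPS_PLATFORM_KEY"
ENDPOINT = "https://pollen.googleapis.com/v1/forecast:lookup"  # assumed path

params = {
    "key": API_KEY,
    "location.latitude": 40.7128,   # New York City
    "location.longitude": -74.0060,
    "days": 3,                      # forecast horizon; the announcement mentions up to 5
}

resp = requests.get(ENDPOINT, params=params, timeout=10)
resp.raise_for_status()
forecast = resp.json()

# Print whatever daily pollen info comes back (field names are assumptions).
for day in forecast.get("dailyInfo", []):
    print(day.get("date"), day.get("pollenTypeInfo"))
```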

Here’s more on the feature:

The post Google’s new pollen mapping tool aims to reduce allergy season suffering appeared first on Popular Science.

Is online gaming a good way for lonely young men to find friends? https://www.popsci.com/technology/online-gaming-friend-loneliness-men/ Sat, 26 Aug 2023 11:00:00 +0000 https://www.popsci.com/?p=565608
boy playing video game
DEPOSIT PHOTOS

It certainly helps, but probably should not be a complete replacement for IRL relationships.


We’re in a loneliness epidemic, and it’s become a public health concern. The loss of in-person social connections during the COVID era certainly didn’t help. Loneliness isn’t just bad for the soul, but for physical health too: isolated people are at increased risk for conditions like heart disease, stroke, and dementia. There are a few theories about how things got so bad. And men, it seems, have gotten the short end of the stick, with numerous surveys indicating that American men are stuck in a “friendship recession.”

For some men, though, online communities such as those formed through gaming offer an avenue of hope even when close friendships are hard to come by. In a study published earlier this year, researchers from Texas A&M, the University of North Carolina, and Baylor University found that online gaming groups provided the same social support for life events, and the same sense of community, as in-real-life connections. 

By studying a network of 40 male online gamers who played on the same site for 10 months, the researchers found that these digital relationships were important and helpful for those experiencing depressive symptoms. 

[Related: Dive into the wonderful and wistful world of video game design]

“This finding suggests the chat and community features of online games might provide isolated young men an anonymous ‘third place’ – or space where people can congregate other than work or home – to open up, find empathy and build crucial social connections they may lack in real life,” Tyler Prochnow, assistant professor of Health Behavior at Texas A&M University, wrote in a post in The Conversation this week. “Online social spaces, like gaming communities, may offer an alternative avenue to find connection and discuss serious personal problems without the barriers of formal mental health services…Several participants specifically said they confided about topics they felt unable to discuss with people in their real lives, suggesting these online friendships provided an outlet they were otherwise lacking.”

It’s a complicated matter. Online communities have long been a part of the modern internet, allowing people who are geographically separated to connect based on their hobbies, professions, passions, and beliefs. But their power and influence have only grown in the last two decades. Fun, niche communities started around fandoms can today drive cultural conversations and even movements. Video games themselves have spawned a number of online subcultures.

In 2021, a study from NYU found that membership in online communities such as Surviving Hijab or Subtle Asian Traits built a strong sense of belonging among individuals who may be marginalized or isolated in real life. These communities opened a rare safe space that allowed for deeply personal discussions around problems like relationship struggles, health issues, abuse, grief, and loss. Especially during the pandemic lockdown, multiple studies found that community social networks, while not a complete replacement for in-person interactions, were an indispensable tool for alleviating psychological distress and loneliness.

[Related: Can I offer you a nice meme in these trying times?]

Of course, there are possible downsides to relying too much on digital relationships, considering the ongoing debates around whether video games are good for mental health and whether social media has too tight a grip on the lives of young people.

Prochnow, the first author of the recent study, also acknowledged the limits of the findings, writing that “a key question is whether online social support directly improves depression – or are depressed individuals simply more inclined to seek connections virtually? Despite a massive industry and audience for online gaming, its mental health impacts remain murky.”

All of this raises the question: Are online friendships as good as real-world ones, or do they simply create a convincing illusion of friendship? To be safe, it can’t hurt to have both.

The post Is online gaming a good way for lonely young men to find friends? appeared first on Popular Science.

These nifty drones can lock together in mid-air to form a bigger, stronger robot https://www.popsci.com/technology/drones-assemble-air/ Wed, 23 Aug 2023 22:00:00 +0000 https://www.popsci.com/?p=564938
two drones coming together while flying
Drones, assemble. University of Tokyo / Advanced Intelligent Systems

Teamwork makes the dream work.


A drone’s size affects what it can—or can’t—do. If a drone is too small, it may be limited in the types of tasks it can complete, or the amount of heavy lifting it can do. But if a drone is too big, it may be difficult to get it up in the air or maneuver it around tricky structures, though it may make up for that in other ways. 

A solution that a group of engineers from the University of Tokyo came up with is to create a set of drone units that can assemble and disassemble in the air. That way, they can break up to fit into tight spaces, but can also combine to become stronger if needed. 

Last month, the detailed design behind this type of system, called Tilted-Rotor-Equipped Aerial Robot With Autonomous In-Flight Assembly and Disassembly Ability (TRADY), was described in the journal Advanced Intelligent Systems.

The drones used in the demonstration look like normal quadcopters but with an extra component (a plug or jack). The drone with the plug and the drone with the jack are designed to lock into one another, like two pieces of a jigsaw puzzle. 

[Related: To build a better crawly robot, add legs—lots of legs]

“The team developed a docking system for TRADY that takes its inspiration from the aerial refueling mechanism found in jet fighters in the form of a funnel-shaped unit on one side of the mechanism means any errors lining up the two units are compensated for,” according to Advanced Science News. To stabilize the units once they intertwine, “the team also developed a unique coupling system in the form of magnets that can be switched on and off.”

Engineering photo
The assembly mechanism, illustrated. University of Tokyo / Advanced Intelligent Systems

Although in their test runs, they only used two units, the authors wrote in the paper that this methodology “can be easily applied to more units by installing both the plug type and the jack type of docking mechanisms in a single unit.” 

To control these drones, the researchers developed two systems: a distributed control system that operates each unit independently, and a unified control system that the drones switch to once they join. An onboard PC conveys the position of each drone so the units can angle themselves appropriately when coming together and pulling apart. 

Other than testing the smoothness of the assembly and disassembly process, the team put these units to work by giving them tasks to do, such as inserting a peg into a pipe, and opening a valve. The TRADY units were able to complete both challenges. 

“As a future prospect, we intend to design a new docking mechanism equipped with joints that will enable the robot to alter rotor directions after assembly. This will expand the robot’s controllability in a more significant manner,” the researchers wrote. “Furthermore, expanding the system by utilizing three or more units remains a future challenge.” 

Engineering photo
Here are the assembled drone units working to turn a valve. University of Tokyo / Advanced Intelligent Systems

The post These nifty drones can lock together in mid-air to form a bigger, stronger robot appeared first on Popular Science.

The logic behind AI chatbots like ChatGPT is surprisingly basic https://www.popsci.com/technology/how-do-chatbots-work/ Tue, 22 Aug 2023 13:00:00 +0000 https://www.popsci.com/?p=563434
pastel-colored room with many chairs and many cats perched around the room on chairs and shelves.
AI-generated illustration by Dan Saelinger for Popular Science

Large language models, broken down.


CHATBOTS MIGHT APPEAR to be complex conversationalists that respond like real people. But if you take a closer look, they are essentially an advanced version of a program that finishes your sentences by predicting which words will come next. Bard, ChatGPT, and other AI technologies are large language models—a kind of algorithm trained on exercises similar to the Mad Libs-style questions found on elementary school quizzes. (An algorithm, simply put, is a set of human-written instructions that tells a computer how to solve a problem or make a calculation.) In this case, the algorithm uses your prompt and any sentences it comes across to auto-complete the answer.

Systems like ChatGPT can use only what they’ve gleaned from the web. “All it’s doing is taking the internet it has access to and then filling in what would come next,” says Rayid Ghani, a professor in the machine learning department at Carnegie Mellon University.  

Let’s pretend you plugged this sentence into an AI chatbot: “The cat sat on the ___.” First, the language model would have to know that the missing word needs to be a noun to make grammatical sense. But it can’t be any noun—the cat can’t sit on the “democracy,” for one. So the algorithm scours texts written by humans to get a sense of what cats actually rest on and picks out the most probable answer. In this scenario, it might determine the cat sits on the “laptop” 10 percent of the time, on the “table” 20 percent of the time, and on the “chair” 70 percent of the time. The model would then go with the most likely answer: “chair.”
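In code, that word-picking step amounts to choosing from a probability distribution over candidate words. The toy sketch below is mine, using the made-up probabilities from the example above, and shows the two common strategies: always taking the most likely word, or sampling so that less likely words occasionally appear.

```python
import random

# Toy next-word distribution for "The cat sat on the ___",
# using the illustrative probabilities from the example above.
next_word_probs = {"laptop": 0.10, "table": 0.20, "chair": 0.70}

def pick_greedy(probs: dict[str, float]) -> str:
    """Always choose the single most likely next word."""
    return max(probs, key=probs.get)

def pick_sampled(probs: dict[str, float]) -> str:
    """Sample a word in proportion to its probability, so rarer words sometimes win."""
    words, weights = zip(*probs.items())
    return random.choices(words, weights=weights, k=1)[0]

print("greedy:", pick_greedy(next_word_probs))    # always "chair"
print("sampled:", pick_sampled(next_word_probs))  # usually "chair", sometimes "table" or "laptop"
```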

The system is able to use this prediction process to respond with a full sentence. If you ask a chatbot, “How are you?” it will generate “I’m” based on the “you” from the question and then “good” based on what most people on the web reply when asked how they are.

The way these programs process information and arrive at a decision sort of resembles how the human brain behaves. “As simple as this task [predicting the most likely response] is, it actually requires an incredibly sophisticated knowledge of both how language works and how the world works,” says Yoon Kim, a researcher at MIT’s Computer Science and Artificial Intelligence Laboratory. “You can think of [chatbots] as algorithms with little knobs on them. These knobs basically learn on data that you see out in the wild,” allowing the software to create “probabilities over the entire English vocab.”

The beauty of language models is that researchers don’t have to rigidly define any rules or grammar for them to follow. An AI chatbot implicitly learns how to form sentences that make sense by consuming tokens, which are common sequences of characters drawn from the raw text of books, articles, and websites. All it needs are the patterns and associations it finds among certain words or phrases.  
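To make the idea of tokens concrete, here is a toy sketch of my own that chops a sentence into pieces by greedily matching the longest entry in a tiny hand-picked vocabulary. Real tokenizers, such as the byte-pair-encoding scheme used by GPT models, learn their vocabularies from data and differ in the details, so treat this purely as an illustration of grouping common character sequences.

```python
# Toy tokenizer: greedy longest-match against a tiny, hand-picked vocabulary.
# Real LLM tokenizers (e.g., byte-pair encoding) learn their vocabulary from data.
VOCAB = {"the", "cat", "sat", "on", "chair", "ch", "air",
         "s", "a", "t", "o", "n", "c", "h", "e", "i", "r", " "}

def tokenize(text: str) -> list[str]:
    tokens, i = [], 0
    while i < len(text):
        # Try the longest possible piece first, shrinking until something matches.
        for j in range(len(text), i, -1):
            piece = text[i:j]
            if piece in VOCAB:
                tokens.append(piece)
                i = j
                break
        else:
            tokens.append(text[i])  # unknown character: fall back to a single char
            i += 1
    return tokens

print(tokenize("the cat sat on the chair"))
# ['the', ' ', 'cat', ' ', 'sat', ' ', 'on', ' ', 'the', ' ', 'chair']
```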

But these tools often spit out answers that are imprecise or incorrect—and that’s partly because of how they were schooled. “Language models are trained on both fiction and nonfiction. They’re trained on every text that’s out on the internet,” says Kim. If MoonPie tweets that its cookies really come from the moon, ChatGPT might incorporate that in a write-up on the product. And if Bard concludes that a cat sat on the democracy after scanning this article, well, you might have to get more used to the idea.


The post The logic behind AI chatbots like ChatGPT is surprisingly basic appeared first on Popular Science.

How Formula E race cars are guiding Jaguar’s EV future https://www.popsci.com/technology/jaguar-formula-e-ev/ Mon, 21 Aug 2023 11:00:00 +0000 https://www.popsci.com/?p=563823
Jaguar's Formula E racecar
Jaguar's entry into Formula E is full of intention. Jaguar Racing

Here are some of the key lessons the luxury brand has learned from the race track.


Jaguar has an ambitious vision to go all-electric by 2025 with a new set of EVs. By 2030, the brand plans to launch e-models of its whole lineup. It joins a suite of other carmakers racing to develop zero-emissions vehicles to fight against climate change. And, on the race track, the luxury brand is already showing off its electric prowess. 

Although Jaguar had a Formula 1 team for a few years in the early 2000s, it took a break and didn’t participate in any motorsport activity after 2004. It returned in 2016 through a new all-electric championship called Formula E.

“It was a very immature series, but it had this ability, this scope to be massive,” Jack Lambert, research innovation manager for Jaguar Motorsport, tells PopSci. When the championship launched, the market was only starting to embrace EVs. “And as the technology developed from Gen 1 to Gen 2, and now Gen 3, the road relevance has developed with it.”    

As summer starts winding down, Jaguar is coming to the end of its ninth season of racing in Formula E. Lambert notes that since the first race, EV technology has rapidly progressed, reshaping how the races look. Next year, the company expects to deploy fast-charging systems in its races, which will put that technology to the test. “I would imagine in the next two or three seasons, we would see the pure acceleration capabilities of Formula E cars being able to match that of Formula 1,” he says. “We’re catching up.”

[Related: Electric cars are better for the environment, no matter the power source]

During the early phases of the Gen 1 races, when battery technology was less advanced, teams had to use two cars to complete the approximately 30-mile race. “We would see these really dramatic pit stops where the driver would come in and jump out of the car, basically while it was still moving, and try and jump into another one that’s fully charged,” Lambert says. Just six seasons later, he says, Jaguar’s 500-horsepower electric cars have batteries that last the full 50-minute race. Plus, the cars can pull in 600 kilowatts through regenerative braking, an electric vehicle quirk that can convert the kinetic energy from braking into power that charges the battery. 
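As a back-of-the-envelope illustration of what regenerative braking recovers, the kinetic energy shed while slowing is 0.5 * m * (v1^2 - v2^2), and only a fraction of it makes it back into the battery. Every number in the sketch below (car mass, speeds, recovery efficiency) is my own rough assumption for illustration, not a Jaguar figure.

```python
# Back-of-the-envelope estimate of energy recovered by regenerative braking.
# All numbers below are illustrative assumptions, not Jaguar's specifications.
mass_kg = 900.0            # assumed car-plus-driver mass
v_start = 60.0             # entering a braking zone at ~60 m/s (~134 mph)
v_end = 25.0               # exiting the braking zone at ~25 m/s
recovery_efficiency = 0.6  # assumed fraction of braking energy that reaches the battery

kinetic_energy_shed = 0.5 * mass_kg * (v_start**2 - v_end**2)  # joules
energy_recovered = recovery_efficiency * kinetic_energy_shed

print(f"energy shed while braking: {kinetic_energy_shed / 1000:.0f} kJ")
print(f"energy back into the battery: {energy_recovered / 1000:.0f} kJ "
      f"({energy_recovered / 3.6e6:.3f} kWh)")
```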

Formula 1 vs Formula E

They may look similar on the surface, but at the core, Formula 1 and Formula E races are quite different. Formula 1 is known as a constructors’ series. Each team must design and manufacture every element of the vehicle, and consider how a chassis would work with aerodynamics, power units, braking technology, and all of a car’s other systems. 

Formula E, on the other hand, is a manufacturers’ series, which means that a high percentage of each vehicle is the same. “We place our unique development in only certain areas of the car that are technically regulated by the Fédération Internationale de l’Automobile (FIA). For Formula E, it’s all focused on the powertrain and e-mobility-related technology,” says Lambert. Jaguar’s engineers must figure out how to take power from the battery and get that to the wheel in the most efficient way possible. The crux of their focus is on the inverters, the motors, and the batteries. 

[Related: An inside look at the data powering McLaren’s F1 team]

Formula E cars operate the way that all EVs do. The batteries store a big block of chemical energy that needs to be turned into kinetic energy at the tires. “The way you do that is you take the energy that comes out of the wheel in the form of voltage and direct current through an inverter,” Lambert explains. The inverter uses several switching methods to convert direct current into an alternating current, in the form of an oscillating sine wave. The motor’s rotor contains magnets, giving it a magnetic field. When the oscillating current interacts with the rotor’s magnetic field, it creates torque, which is transmitted through a gearbox and ultimately drives the shafts and tires. 
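One common way an inverter approximates that sine wave is sinusoidal pulse-width modulation: the DC supply is switched on and off very quickly, and the fraction of time the switch spends on in each short interval follows a sine-shaped reference. The sketch below is a generic illustration of that idea, not Jaguar’s control code, and its voltage and frequency values are arbitrary assumptions.

```python
import math

# Generic sinusoidal PWM illustration, not any team's actual motor-control code.
DC_BUS_VOLTS = 400.0         # assumed DC link voltage
ELECTRICAL_FREQ_HZ = 100.0   # assumed electrical frequency of the desired AC output
STEPS = 8                    # samples across one electrical cycle, kept small for printing

for step in range(STEPS):
    t = step / (STEPS * ELECTRICAL_FREQ_HZ)                      # time within the cycle, seconds
    reference = math.sin(2 * math.pi * ELECTRICAL_FREQ_HZ * t)   # desired sine, -1..1
    duty = 0.5 * (1.0 + reference)                               # fraction of time the switch is on
    avg_output = DC_BUS_VOLTS * (duty - 0.5) * 2                 # average voltage seen by the phase
    print(f"t={t * 1000:5.2f} ms  duty={duty:4.2f}  avg phase volts={avg_output:7.1f}")
```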

Race to road

When Jaguar’s team thinks about race to road technology transfer, they aren’t focused on any specific component. Race cars have dramatically different hardware than any road-bound consumer cars. It’s more about the systems engineering approach to solving big-picture problems, such as how to get electric power from the battery to the tires in the most efficient way. 

“Efficient powertrains in racing allow us to be faster and complete the race distance quicker, but actually, the same technology translated into road allow consumer EVs to go further on one charge,” Lambert explains. “There’s a lot of different approaches and a lot of different technologies that enable that.” 

One good example is their work with semiconductor company Wolfspeed on silicon carbide technology, a material that has been used in Jaguar’s race car inverters since 2017. These types of inverters can expand an EV’s overall range, “but at the time it wasn’t appropriate for the market, given that it was very early in its maturation and it was expensive,” says Lambert. “Now what you’re seeing is the automotive industry is catching up. So all the cars that you’ll see on the road going forward, particularly in the luxury space, will have silicon carbide within their inverters.”

Through racing, Jaguar can also observe how its technology behaves and collect relevant data around performance metrics like acceleration and battery use. And data, like in Formula 1, is a powerful tool for the team. 

The design for Formula E cars is checked over and locked in for two seasons. That means once racing regulators approve a car design, the team can’t really change it. What Jaguar’s engineers and developers can tweak in the off-season is their software. In collaboration with IT company Tata Consultancy Services, Jaguar is building analytics platforms to process and handle all the data—3 terabytes every weekend—generated through the races. This software’s capabilities, as tested through racing, could one day help smart or autonomous vehicles on the road. 

Quite often, when the Jaguar team looks at a new EV innovation, they’ll note that it’s not fully developed for consumer vehicles, but it could be put into a race car. “That becomes an early innovation testbed,” says Lambert. “Rather than having something that lives in the virtual space and in the research for two years, we can quickly turn that into proof-of-concept and put it on a race car.”

The post How Formula E race cars are guiding Jaguar’s EV future appeared first on Popular Science.

Inside Delta’s in-house meteorology wing https://www.popsci.com/technology/delta-meteorology-team/ Sat, 12 Aug 2023 11:00:00 +0000 https://www.popsci.com/?p=562558
airplane flying in clouds
Daniela Perez / Unsplash

A “surface desk” and an “upper air desk” help inform routes in tricky weather conditions.


Airlines can’t control the weather. They can only do the next best thing, which is to predict upcoming hazards as accurately as possible, as soon as possible, and plan ahead for route disruptions. To do that, they need a team of meteorologists tracking conditions in the sky and on the ground. Delta Air Lines gave PopSci a peek into the inner workings of their weather team. Here’s what we found out. 

“There’s always weather. Every summer, we’re always ready for thunderstorms, we’re always ready for hot temperatures across the desert southwest,” says Warren Weston, lead meteorologist at Delta. Summer brings its own unique set of challenges. For this summer in particular, Weston says the team observed a fairly persistent high-pressure setup across the desert southwest. 

“When we saw the hot temperatures, we started producing a daily forecast for Las Vegas, Phoenix, and Salt Lake City, that was available not only to our decision-makers here within our operation center, but it was also visible to the station managers in the field,” he adds. “And they could look at each day to see what the temperature was each hour, so they would know what hours of the day to expect the highest impact, and we were able to give them this higher resolution data for them to make decisions.” 

[Related: You can blame Southwest Airlines’ holiday catastrophe on outdated software]

Climate change is adding another set of challenges for meteorologists. But weather is an incredibly data-driven field, and Weston hopes that with the learnings they gather each summer, they’ll be able to predict hazardous events in a better and more timely manner. 

Here’s a detailed breakdown of the meteorology team’s job. Delta boasts 23 meteorologists on staff, more than any other airline. 

Every day, this team provides weather briefings to operations staff at airports and monitors ongoing conditions. They source a great deal of publicly available data from the National Weather Service and the National Oceanic and Atmospheric Administration. The team also uses in-house data for its predictions. For instance, if a plane flying from point A to point B encounters turbulence, the pilots can file a pilot report. That report is visible inside Delta’s operation center, and the team can use that information to refine its turbulence forecast. 

These in-house meteorologists are the sole weather providers for Delta Air Lines. But every day they collaborate with government meteorologists and other airlines’ meteorologists on highlighted areas of concern, like a line of thunderstorms traveling across a specific part of the US. They also coordinate with the air traffic control system command center in Washington, DC, to give it an idea of how Delta is thinking about tailoring its routes based on the forecasts.

The meteorologists are split into two groups: The “surface desk” and the “upper air desk.”

The surface desk meteorologists look closely at Delta’s hub airports, like Atlanta and New York City, and put out detailed hourly forecasts, primarily for the next 30 hours. “On those desks we’re looking for things like thunderstorms, is there going to be low clouds causing fog, or anything that could prevent us from getting into that airport when we attempt to land,” says Weston.

The “upper air desk” looks at high-level turbulence and other conditions such as space weather like solar flares, concentrations of ozone, and even volcanic ash, which can damage an airplane’s engines.

“On the upper air side, most of our forecasts are happening well before the flight is planned. If you think about a 10-hour flight from the US to Europe, you need a forecast that’s valid for the next 10 or 15 hours, not just right now,” Weston says. “We’re looking at turbulence, thunderstorms, and working with our flight planners to find the most efficient route with the least amount of turbulence.”

For example, if a snowstorm is forecast for New York City, the team will start issuing updates a few days before the storm arrives to other parts of the operation, like the station manager who is deciding on staffing levels. Extra hands may be needed if planes require de-icing, and if that isn’t planned for, it can cause delays. 

[Related: How a quantum computer tackles a surprisingly difficult airport problem]

If a hurricane or severe storm is brewing, the meteorology team has to issue a specific forecast showing when the main impacts will be. “Most of the times in a hurricane you’ll get winds high enough that it’s over the threshold that an airplane is able to land or take off in. So it’s our job to narrow down that time frame to say between this period and this period, conditions are going to be inoperable,” Weston explains. “But, as we get outside of this time period, the winds will come down and we should be able to gradually start operating, and restore operation to a certain region.” 

The team monitors air quality conditions too, not just to see whether planes can fly, but to make sure the ground crew stays safe as well. In that respect, wildfire smoke has become an item of note to look out for. “Smoke is very unique because of course we don’t predict the formation of smoke, because that’s predicting a forest fire which we are not in the business of doing,” Weston says. “But our concern with the fire is that it results in mostly air quality issues. If the air quality because of the smoke reaches a certain threshold, then there’s processes in place, like having [the ground crew members] mask, or having them work outside only for a certain amount of time.”

The post Inside Delta’s in-house meteorology wing appeared first on Popular Science.

Scientists shake crumpled tinfoil to create electricity https://www.popsci.com/technology/aluminum-foil-electricity/ Fri, 11 Aug 2023 14:00:00 +0000 https://www.popsci.com/?p=562389
food wrapped in aluminum foil
Instead of throwing that aluminum foil away, you could use it as a portable charger. Oscar Söderlund / Unsplash

It could help power small devices (and you can squeeze a quick workout in).


Shaking around some crumpled balls of tinfoil may not seem like a very productive action. But surprisingly, it can generate enough electricity to power a small LED light. At least that’s what an experiment recently described in the journal Advanced Science shows. 

The crinkled foil balls rattling around are part of a tubular contraption called a triboelectric nanogenerator that the researchers constructed to harness the energy of movement. Here, by playing with the charges generated through contact electrification and electrostatic induction (think static electricity), mechanical energy can be converted into electricity.

The first such device to use this type of physics for power comes from a 2012 study by Zhong Lin Wang of the Chinese Academy of Sciences in Beijing and his colleagues. Similar ideas, like taking the mechanical energy generated by sound waves and turning it into power, have also been around for a decade or so. 

[Related: How to turn AAA batteries into AAs]

Since then, that idea has been iterated upon many times, with different research groups switching out the materials and trialing various designs. Such technology could have applications for smart homes, multi-purpose clothing, and other remote sensors. 

This recent version in Advanced Science proposes foil balls as a way to both generate electricity and recycle used aluminum foil that would otherwise go into the bin.  

[Related: Hyperspectral imaging can detect chemical signatures of earthbound objects from space]

According to the paper, this device “primarily comprises an acrylic substrate, a charge-inducing polytetrafluoroethylene (PTFE) layer, aluminum top and bottom electrodes, and crumpled aluminum foil.” The foil balls, which are positively charged, shuttle electrons from one electrode to another as they’re shaken. 

This mechanism, interacting with the air around it, can produce an electric field that plays an important role in the charging and discharging cycle, seen commonly in batteries. This process can produce just a little bit of juice.

While this tiny amount of energy may never be able to power serious electronics like a flatscreen TV, it could be integrated into a light, portable charger. The researchers tested the device by powering smaller loads, including 500 light-emitting diodes (LEDs) and 30-W commercial lamps, and it performed well.

Watch the device at work below:

Engineering photo
Son. J et al, Advanced Science

The post Scientists shake crumpled tinfoil to create electricity appeared first on Popular Science.

An art-filled time capsule is headed for the moon https://www.popsci.com/technology/lunar-codex-data-storage/ Sat, 05 Aug 2023 11:00:00 +0000 https://www.popsci.com/?p=561117
Orion space capsule capturing surface of moon during NASA Artemis I mission
On Dec. 5, 2022 during the Artemis I uncrewed mission, Orion captured the moon on the day of return powered flyby, the final major engine maneuver of the flight test. NASA

The creators chose a simple but hardy form of data storage.


An archive of international art is headed to the moon this year. The project, called the Lunar Codex, brands itself as “a message-in-a-bottle to the future, so that travelers who find these time capsules might discover some of the richness of our world today.” It will contain contemporary art, poetry, magazines, music, film, podcasts and books by 30,000 artists, writers, musicians and filmmakers from 157 countries.

The project is run by Incandence, a private company that owns the physical time capsules, the archival technology used in the capsules, and related trademarks, and was thought up by Canadian scientist and author Samuel Peralta, who is the executive chairman of Incandence. 

From 2023 to 2026, in a parallel mission with the Artemis launches, NASA will not only send scientific instruments to the moon, but also carry commercial payloads from partners. Peralta, in July 2020, purchased payload space from Astrobotic Technology, reserving it for the time capsules that would make up the Lunar Codex. Then the submissions rolled in. Artists do not have to pay to be considered, but the works that make it in have all been hand-selected. 

If all goes according to plan, the project will be a permanent installation on the moon, sitting within a MoonPod onboard the lunar lander for the Astrobotic Peregrine Mission 1 scheduled to launch later this year. The team plans to send multiple collections via multiple launches on rockets from SpaceX and the United Launch Alliance. 

Such a message requires an equally enduring medium. The one chosen by Lunar Codex is NanoFiche—a nickel-based material that etches shrunken down versions of texts and photos onto a disc-like surface. According to Lunar Codex, a single disc, which is around 3 centimeters across, can hold hundreds of small square images, each 2,000 pixels by 2,000 pixels in size. They come in sets of three in order to portray color, one channel each for red, green, and blue. 

According to Lunar Codex, the medium “can store 150,000 pages of text or photos on a single 8.5”x11” sheet. It is currently the highest density storage media in the world.” The benefit is that you can read the data easily with a microscope, or a really powerful magnifying glass; no software is needed. That sidesteps a difficulty many forms of digital storage have today: digital data, usually kept in the form of bits, can degrade over time. 

Since nickel does not oxidize, degrade, or melt (except at extremely high temperatures), and can withstand the environmental factors it would face in outer space, like radiation, it’s the most stable, and probably cheapest, form of long-term storage available. The Arch Lunar Library, an effort by the non-profit Arch Mission Foundation to preserve human culture and knowledge, also uses NanoFiche as its preferred form of storage. 

[Related: Inside the search for the best way to save humanity’s data]

This kind of storage does have some limitations. For example, capturing film and music would be tedious and expensive. For film, each frame would have to be etched—a daunting task. Instead, screenplays or scripts are captured. And music is represented as sheet music or hex-encoded MIDI files.

The Lunar Codex is also experimenting with another way to archive music: etching waveforms and frequency spectrograms onto NanoFiche. “The original music may be reconstructed via sound wave analysis algorithms,” Peralta explains on the website. 

Of course, the Lunar Codex isn’t the first archival project to reach the moon. Besides the Arch Mission Foundation’s Lunar Library and an assortment of miscellaneous human trash left behind, there’s also “The Moon Museum,” which arrived with Apollo 12 in 1969: an etched ceramic wafer smuggled onto a lander leg.

The post An art-filled time capsule is headed for the moon appeared first on Popular Science.

GPT-3 is pretty good at taking the SATs https://www.popsci.com/technology/gpt-3-language-model-standardized-test/ Tue, 01 Aug 2023 19:00:00 +0000 https://www.popsci.com/?p=560421
multiple choice scantron with pencil
Language models are pretty good at taking standardized tests. Nguyen Dang Hoang Nhu / Unsplash

It scored better than the average college applicant, but probably isn’t well-rounded enough to get in.


Large language models like GPT-3 are giving chatbots an uncanny ability to give human-like responses to our probing questions. But how smart are they, really? A new study from psychologists at the University of California, Los Angeles, out this week in the journal Nature Human Behaviour, found that the language model GPT-3 has better reasoning skills than an average college student—an arguably low bar. 

The study found that GPT-3 performed better than a group of 40 UCLA undergraduates when it came to answering a series of questions that you would see on standardized exams like the SAT, which requires using solutions from familiar problems to solve a new problem. 

“The questions ask users to select pairs of words that share the same type of relationships. (For example, in the problem: ‘Love’ is to ‘hate’ as ‘rich’ is to which word? The solution would be ‘poor,’)” according to a press release. Another set of analogies were prompts derived from a passage in a short story, and the questions were related to information within that story. The press release points out: “That process, known as analogical reasoning, has long been thought to be a uniquely human ability.”

In fact, GPT-3’s scores were better than the average SAT score for college applicants. GPT-3 also did just as well as the human subjects when it came to logical reasoning, tested through a set of problems called Raven’s Progressive Matrices.

It’s no surprise that GPT-3 excels at the SATs. Previous studies have tested the model’s logical aptitude by asking it to take a series of standardized exams such as AP tests, the LSAT, and even the MCAT—and it passed with flying colors. The latest version of the language model, GPT-4, which has the added ability to process images, is even better. Last year, Google researchers found that they could improve the logical reasoning of such language models through chain-of-thought prompting, in which the model breaks a complex problem down into smaller steps. 
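As a concrete illustration of chain-of-thought prompting, the sketch below contrasts a direct question with one that asks the model to reason step by step. No particular API is assumed; only the prompt wording reflects the technique described above.

```python
# Illustration of chain-of-thought prompting. The prompts below would be sent to
# a language model through whatever API you use; no specific API is assumed here.
question = (
    "A bat and a ball cost $1.10 in total. The bat costs $1.00 more than "
    "the ball. How much does the ball cost?"
)

direct_prompt = question + "\nGive only the final answer."

chain_of_thought_prompt = (
    question
    + "\nLet's think step by step. Write out each intermediate step before "
      "giving the final answer."
)

# The second prompt nudges the model to break the problem into smaller steps,
# the pattern reported to improve logical reasoning.
for name, prompt in [("direct", direct_prompt), ("chain of thought", chain_of_thought_prompt)]:
    print(f"--- {name} ---\n{prompt}\n")
```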

[Related: ChatGPT’s accuracy has gotten worse, study shows]

Even though AI today is fundamentally challenging computer scientists to rethink rudimentary benchmarks for machine intelligence like the Turing test, the models are far from perfect. 

For example, a study published this week by a team from UC Riverside found that language models from Google and OpenAI delivered imperfect medical information in response to patient queries. Further studies from scientists at Stanford and Berkeley earlier this year found that ChatGPT, when prompted to generate code or solve math problems, was getting sloppier with its answers, for reasons unknown. And among regular folks, ChatGPT is fun and popular, but it’s not very practical for everyday use. 

And it still performs dismally at visual puzzles and at understanding the physics and spaces of the real world. To address this, Google is trying to combine multimodal language models with robots. 

It’s hard to tell whether these models are thinking the way we do, that is, whether their cognitive processes are similar to our own. Still, an AI that’s good at test-taking is not generally intelligent the way a person is. It’s also hard to tell where these models’ limits lie and what their potential could be. Answering those questions would require opening them up and exposing their software and training data, a fundamental criticism experts have of how closely OpenAI guards its LLM research. 

The post GPT-3 is pretty good at taking the SATs appeared first on Popular Science.

Robots could now understand us better with some help from the web https://www.popsci.com/technology/deepmind-google-robot-model/ Mon, 31 Jul 2023 11:00:00 +0000 https://www.popsci.com/?p=559920
a robot staring at toy objects on a table
This robot is powered by RT-2. DeepMind

A new type of language model could give robots insights into the human world.

Tech giant Google and its subsidiary AI research lab, DeepMind, have created a basic human-to-robot translator of sorts. They describe it as a “first-of-its-kind vision-language-action model.” The pair said in two separate announcements Friday that the model, called RT-2, is trained with language and visual inputs and is designed to translate knowledge from the web into instructions that robots can understand and respond to.

In a series of trials, the robot demonstrated that it can recognize and distinguish between the flags of different countries, a soccer ball from a basketball, pop icons like Taylor Swift, and items like a can of Red Bull. 

“The pursuit of helpful robots has always been a herculean effort, because a robot capable of doing general tasks in the world needs to be able to handle complex, abstract tasks in highly variable environments — especially ones it’s never seen before,” Vincent Vanhoucke, head of robotics at Google DeepMind, said in a blog post. “Unlike chatbots, robots need ‘grounding’ in the real world and their abilities… A robot needs to be able to recognize an apple in context, distinguish it from a red ball, understand what it looks like, and most importantly, know how to pick it up.”

That grounding is why training robots has traditionally required generating billions of data points from scratch, along with specific instructions and commands. A task like telling a bot to throw away a piece of trash has involved programmers explicitly training the robot to identify the object that is the trash, the trash can, and the actions needed to pick the object up and throw it away. 

For the last few years, Google has been exploring various avenues of teaching robots to do tasks the way you would teach a human (or a dog). Last year, Google demonstrated a robot that can write its own code based on natural language instructions from humans. Another Google subsidiary called Everyday Robots tried to pair user inputs with a predicted response using a model called SayCan that pulled information from Wikipedia and social media. 

[Related: Google is testing a new robot that can program itself]

AI photo
Some examples of tasks the robot can do. DeepMind

RT-2 builds off a similar precursor model called RT-1 that allows machines to interpret new user commands through a chain of basic reasoning. Additionally, RT-2 possesses skills related to symbol understanding and human recognition—skills that Google thinks will make it adept as a general purpose robot working in a human-centric environment. 
More details on what robots can and can’t do with RT-2 are available in a paper DeepMind and Google put online.

[Related: A simple guide to the expansive world of artificial intelligence]

RT-2 also draws from work on vision-language models (VLMs), which have been used to caption images, recognize objects in a frame, or answer questions about a certain picture. So, unlike SayCan, this model can actually see the world around it. But for a VLM to control a robot, a component for outputting actions needs to be added, and this is done by representing the different actions the robot can perform as tokens in the model. With this, the model can not only predict what the answer to someone’s query might be, but also generate the action most likely associated with it. 
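
To picture what “actions as tokens” means, here is a simplified, hypothetical sketch. The bin count, action range, and action format below are illustrative assumptions rather than details from the RT-2 paper: each dimension of a continuous action, such as a small end-effector move, is rounded into an integer bin, and those integers can then sit in the model’s vocabulary alongside ordinary word tokens.

```python
import numpy as np

# Hypothetical illustration: turn a continuous robot action into discrete tokens.
N_BINS = 256            # assumed number of bins per action dimension
LOW, HIGH = -1.0, 1.0   # assumed normalized action range

def action_to_tokens(action: np.ndarray) -> list[int]:
    """Map each action dimension to an integer bin, usable as a model token."""
    clipped = np.clip(action, LOW, HIGH)
    bins = np.round((clipped - LOW) / (HIGH - LOW) * (N_BINS - 1)).astype(int)
    return bins.tolist()

def tokens_to_action(tokens: list[int]) -> np.ndarray:
    """Invert the discretization (up to quantization error)."""
    bins = np.asarray(tokens, dtype=float)
    return bins / (N_BINS - 1) * (HIGH - LOW) + LOW

# Example: a small gripper move (dx, dy, dz, gripper open/close)
action = np.array([0.05, -0.10, 0.02, 1.0])
tokens = action_to_tokens(action)       # e.g., [134, 115, 130, 255]
recovered = tokens_to_action(tokens)    # close to the original action
```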

DeepMind notes that, for example, if a person says they’re tired and wants a drink, the robot could decide to get them an energy drink.

The post Robots could now understand us better with some help from the web appeared first on Popular Science.

Hyperspectral imaging can detect chemical signatures of earthbound objects from space https://www.popsci.com/technology/hyperspectral-imaging/ Thu, 27 Jul 2023 19:00:00 +0000 https://www.popsci.com/?p=559542
hyperspectral imagery next to black and white imagery
Everything has a unique spectral signature. Pixxel

Here's how it works.

Imagine that you could detect the chemical makeup of a car, pipeline, or field of crops from space. In theory, that would allow scientists to identify leaks, runoff, pollution and more from a wide-scanning observatory hundreds of miles away from the target object. 

A technology called hyperspectral imaging makes that possible, and it does so by working with the different wavelengths of light. According to GIS Geography, this approach divides a spectrum of light into hundreds of “narrow spectral bands.” Based on how objects transmit, reflect, and absorb light across those bands, each can be assigned a unique chemical signature. 

The technique is a bit different from other remote sensing approaches, which may measure microwaves or radio waves, and it is more detailed than other spectral imaging technology that works with fewer bands of light.

Everything from trees and soils to metals, paints, and fabrics has a unique spectral fingerprint. Northrop Grumman’s hyperspectral imaging system, for example, can distinguish a maple from an oak tree, and within the tree, healthy growth versus unhealthy growth. 
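
One common way to compare a measured spectrum against a library of known fingerprints is the spectral angle: treat each spectrum as a vector across the bands, and the smaller the angle between two vectors, the more similar the materials. The sketch below is a generic illustration of that idea, not the specific pipeline any of these companies uses, and the band values are made up.

```python
import numpy as np

def spectral_angle(measured: np.ndarray, reference: np.ndarray) -> float:
    """Angle (radians) between two spectra treated as vectors; smaller = more similar."""
    cos_theta = np.dot(measured, reference) / (
        np.linalg.norm(measured) * np.linalg.norm(reference)
    )
    return float(np.arccos(np.clip(cos_theta, -1.0, 1.0)))

# Made-up reflectance values across a handful of narrow spectral bands.
library = {
    "healthy maple":  np.array([0.04, 0.08, 0.45, 0.50, 0.48]),
    "stressed maple": np.array([0.06, 0.10, 0.30, 0.33, 0.31]),
    "bare soil":      np.array([0.12, 0.18, 0.22, 0.25, 0.27]),
}

pixel = np.array([0.05, 0.09, 0.42, 0.47, 0.46])  # spectrum of one image pixel
best_match = min(library, key=lambda name: spectral_angle(pixel, library[name]))
print(best_match)  # likely "healthy maple" for these made-up numbers
```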

This fingerprint can allow satellites to pick up on nutrient variations, moisture levels, and more. While the fundamentals of the tech have been around since the 1970s, it still needs further development for commercial use, and it has been heavily investigated by various agencies and research groups for the better part of the last decade.

[Related: Google expands AI warning system for fire and flood alerts]

Key hurdles to this technique becoming more common have been bringing down the cost, miniaturizing the materials needed in such a system, developing software and machine learning models that can rapidly process and sort through the data, and getting better image resolution.

This year, a growing number of investments by private companies and government agencies around the world are bringing this technique to the forefront, where it could be useful in fields including agriculture, defense, environmental science, industrial settings, forensics, art, medicine, energy, and mining. A research report from Spherical Insights & Consulting predicts that the market for hyperspectral imagery will grow to be worth $47.3 billion by 2032; it is currently valued at around $16 billion.

In March, TechCrunch reported that the US National Reconnaissance Office awarded five-year study contracts worth $300,000 each to BlackSky, Orbital Sidekick, Pixxel, Planet, Xplore, and HyperSat to add hyperspectral satellite imagery to its available suite of remote sensing tech. One of the companies, Pixxel, also saw a massive investment in its latest funding round from tech giant Google.

The post Hyperspectral imaging can detect chemical signatures of earthbound objects from space appeared first on Popular Science.

DARPA would like to make scrap wood stronger with WUD https://www.popsci.com/technology/darpa-wud/ Mon, 24 Jul 2023 11:00:00 +0000 https://www.popsci.com/?p=558321
wood cross sections
Alexandre Jaquetoni / Unsplash

The program aims to recycle waste from DoD that would head to the landfill otherwise.

What does the Department of Defense do with scrap wood, cardboard, and paper? Usually, just send them to the landfill. But these seemingly small portions of waste materials do add up—to about 13 pounds per soldier each day. According to the US Army Corps of Engineers, these comprise around 80 percent of all solid waste made at DoD forward operating bases. 

Now DARPA, the Pentagon agency that focuses on innovation and research, wants to divert that waste from landfills or burial by turning it into something useful. Through a new program called Waste Upcycling for Defense (WUD), the agency wants to find ways to integrate scrap wood, cardboard, paper, and other cellulose-derived matter into building materials. Scientifically speaking, this is not a new idea; various teams of researchers have been testing out the process for years. 

The basic formula is as follows: chemically treat the scraps to degrade a wood component called lignin, then mechanically press them together to make them denser, stronger, and more durable against bad weather, water, and fire. In some instances, these made-again wood products are stronger than the original wood itself. And as attention around climate change focuses on the sustainability of the construction industry, there are increasing efforts to reduce emissions and experiment with greener materials, like green cement and maybe even fungi.

[Related: The ability for cities to survive depends on smart, sustainable architecture]

Although small batches of products have been made with this technique at laboratory scale using harvested wood, the methods have not been tested on scraps, so they may need to be adjusted or refined. Ideally, researchers would come up with a solution that can be scaled up for mass production. 

“Finished products could greatly reduce the need for re-supply of traditional wood products, such as harvested lumber used in DoD construction and logistics,” WUD program manager Catherine Campbell said in a press release.

[Related: DARPA wants aircraft that can maneuver with a radically different method]

DARPA aims to develop and test products and methods through a feasibility stage over the next 24 months. At the eight-month mark, the agency hopes to start conducting mechanical property testing on the samples to figure out ways to reduce chemical and energy consumption in the process. Around the 21-month mark, full demos are expected to be ready to present to DARPA. The end goal of the Phase I period is a preliminary design for a device that can produce densified wood from wood waste at a rate of 100 kilograms per hour. 

There is an accompanying callout for participation from the scientific community in this effort. Proposals are due by mid-September this year.

The post DARPA would like to make scrap wood stronger with WUD appeared first on Popular Science.

The Museum of Failure challenges how we think about mistakes https://www.popsci.com/technology/museum-of-failure/ Fri, 21 Jul 2023 10:00:00 +0000 https://www.popsci.com/?p=557264
Model of the Titanic and the Boeing 737 Max at the Museum of Failure.
The Titanic and Boeing 737 Max are just some of the transportation failures on display at this museum. The Museum of Failure

It's really an ode to innovation.

In the series I Made a Big Mistake, PopSci explores mishaps and misunderstandings, in all their shame and glory.

On my first attempt to find the Museum of Failure, I made a wrong turn while navigating through the massive, often unlabeled buildings nestled in Brooklyn’s Industry City, and accidentally stumbled across The Innovation Lab instead. Consider it a tiny mistake.

The Museum of Failure is not a permanent addition to the eccentric collection of galleries in New York, but is rather a traveling pop-up that aims to engage people all over the world with the idea of making mistakes. It was created in 2016 by Samuel West, an Icelandic-American psychologist who worked with companies on increasing their innovation and productivity. 

During his research, he had previously found that the most innovative companies were those with a high degree of exploration and experimentation, which, of course, means that they are also going to have a high rate of failure. By making room for failure, companies could paradoxically tap into an opportunity for learning and growth. 

Engineering photo
Healthcare-related failures. Charlotte Hu

West started collecting projects that were deemed failures, but he kept looking for a new way to communicate the research. Inspiration struck when he visited the Museum of Broken Relationships in Los Angeles. “Then, I realized that the concept of a museum is very flexible,” he tells PopSci.

As of this year, the pop-up has made stops in Sweden, France, Italy, mainland China, and Taiwan. In the US, it was in Los Angeles for a spell, and after New York, the team will pack up shop and reopen in Georgetown in Washington, DC, on September 8. The reception has surpassed all of West’s expectations; the show was so popular in New York that its run was extended by a month. “I thought it was a nerdy thing,” he says. “And to see how it resonates with people around the world has been fantastic.”  

Engineering photo
Some R-rated failures. Charlotte Hu

The main objective of the museum is to help both organizations and individuals appreciate the important role of failure—a deviation from desired outcomes, if you want to think about it in a more clinical way—in progress and innovation. “If we don’t accept failure as a way forward and as a driving force of progress and innovation, we can’t have the good stuff either,” West says. “We can’t have the tech breakthroughs or the new science, and products. Even ideologies need to fail before we figure out what works.” 

Despite being marked as failures, most of the items in the museum are actually innovations, meaning that they tried something novel, and attempted to challenge the norm by proposing something that was interesting and different. 

The museum itself feels like a small expo center, composed of a hodgepodge of stalls that group products loosely by category and similarity. There’s not a set way to move through the space. “A lot of people, I’ve found, are sort of lost without a path, and they kind of don’t know where to begin,” says Johanna Guttmann, a director of the exhibit. “The people that really get it automatically are in product design, or marketing.” 

Engineering photo
The Hula chair is free for visitors to try. Charlotte Hu

The team has also designed an accompanying app that guides museum-goers through the various items on display. The app has a QR code scanner, which takes users inside a back catalog of more than 150 “failures” across themes like “the future is (not) now,” “so close, and yet,” “bad taste,” “digital disasters,” “medical mishaps,” and more. Each product on show not only comes with a detailed description of its history and impact, but is also ranked on a scale of one to eight for innovation, design, execution, and fail-o-meter. 

Familiar names of people, companies, and products pepper the exhibit, like Elon Musk, Theranos, MoviePass, Fyre Festival, Titanic, and Google Glass. It features both the notorious and the newsworthy, from the Boeing 737 Max and CNN+ to Facebook Libra, Hooters Air, and Blockbuster. Donald Trump has his own section. “I like the ones with the good story,” West says.

Some of these stories are intended to challenge the perception of failure. “The reason for failure many times is outside of you doing something wrong,” says Guttmann. For many products, it was a case of bad timing or money issues; in the case of the Amopé Foot File, the product was so successful at doing what it was supposed to do that it became a failure for the company profit-wise. 

Engineering photo
Some failures, like the Nintendo Power Glove, inspired later successes. Charlotte Hu

“Kodak invented the digital camera in the 70s only to be bankrupt by digital photography,” says West. “So it was a failure not of tech, but a thing of adapting and updating their business model.” 

Some failures showcase the importance of persistence and of iterating on ideas. “Nintendo, for example, tried early on in the 90s to make their games more interactive and immersive by making them 3D,” West noted. The company made a 3D console that was terrible and gave kids headaches, and it made the poorly received Power Glove that hooked up to a TV through wonky antennas. Even though the execution was bad, the idea of motion control stuck, and the Power Glove became a precursor to the popular Nintendo Wii console. 

Engineering photo
The post-it wall. Charlotte Hu

Placed at the end of the exhibit is a wall titled “Share your failure,” and it’s plastered with sticky notes. This is Guttmann’s favorite part of the experience. “It has taken on a life of its own,” she says. People leave both funny and serious anecdotes behind, including microwave failures, relationship disasters, and personal tragedies. “The anonymity is part of the appeal,” she notes. 

Guttmann likes to say that the 150 or so items in the museum are really just props for conversation. She sees the museum’s potential for opening dialogues about culture, especially since in countries like the US there’s encouragement to move fast and break things, or “fake it until you make it,” whereas in other countries there is a greater emphasis on constant perfectionism. For certain people, the experience has been cathartic; West recalls visitors who have cried at the wall of failure. 

Guttmann recently heard an expert on a podcast say that too much of US education is designed for success. “His point was that every semester should involve a task that’s designed for failure because otherwise, the students build no resilience, and they don’t know what to do with frustration,” she says. “We say that failure is a part of life, and in an educational setting in particular, students should experience some type of failure to learn that they should try different things, deal with it in different ways.”

The post The Museum of Failure challenges how we think about mistakes appeared first on Popular Science.

The first container ship fueled by food scraps is ready to set sail https://www.popsci.com/technology/maersk-green-methanol-ship/ Wed, 19 Jul 2023 19:00:00 +0000 https://www.popsci.com/?p=557793
the view from on a container ship
The shipping industry represents a significant source of carbon emissions. Erik Olsen

The fuel source comes from methane emitted in landfills.

The shipping industry plays a key role in linking together the global supply chain, but it’s fairly carbon intensive. Recent data suggest that shipping makes up around 3 percent of global carbon emissions, and there’s been much effort directed at making different parts of the process greener. 

Shipping giant Maersk has been investing more than $1 billion in ships that run on green methanol, which is derived from the methane produced by food waste in landfills. This week, a new container ship from the company powered by that type of fuel is set to sail from South Korea to Denmark, according to a report from Fast Company, and it will be the first ever to use green methanol. 

Some metrics about the ship, which doesn’t yet have a name: The vessel is 172 meters long (around 560 feet), has a container capacity of 2,100 TEU (twenty-foot equivalent units), and is designed to reach speeds of 17.4 knots.

CNBC estimated that each of these ships costs around $175 million. Maersk has 25 of these ships ordered, and is working to retrofit old ships to run on the new fuel, which means that they get new engine parts, and need new fuel tanks, along with a fuel preparation room and fuel supply system. 

[Related: Birds sometimes hitch rides on ships—and it’s changing the way they migrate]

The company’s vision for the next decade is to transform a quarter of its fleet to run on this fuel. While green methanol still generates some emissions as it’s burned, it can cut emissions by more than 65 percent compared to traditional diesel. Maersk has figured out a couple of ways to make green methanol. Besides collecting methane gas from waste and turning it into bio-methanol, Maersk can also use renewable energy to make green e-methanol, which combines captured carbon dioxide and green hydrogen. Although it was reported last year by Quartz that Maersk was worried about finding enough green fuel for its ships, the company has been signing deals with countries and companies to streamline the supply of green methanol and put these ships to work.

Morten Bo Christiansen, who leads decarbonization at Maersk, said at the TED Countdown Summit last week that bringing down the cost is the next problem to tackle, as reported by Fast Company. But currently, the extra cost associated with green methanol compared to diesel is equivalent to consumers paying five extra cents for a pair of sneakers crossing the ocean. 

The post The first container ship fueled by food scraps is ready to set sail appeared first on Popular Science.

I learned stick shift from the pros—here’s how it went https://www.popsci.com/technology/how-to-drive-stick-shift-ford/ Mon, 17 Jul 2023 11:00:00 +0000 https://www.popsci.com/?p=556945
Ford Bronco
I learned to drive stick in a Ford Bronco. Charlotte Hu

No PRNDLs here.

It’s noon on a hot July day in New York City, and the thermometer reads 100 degrees Fahrenheit. I’m sitting in a red Ford Bronco overlooking an empty parking lot outside of Citi Field in Queens, learning to drive stick shift for the first time. My instructor is Autumn Schwalbe, Ford performance marketing specialist and a professional drag racer. 

Some car purists, and many Europeans, believe that you don’t truly know how to drive until you’ve mastered a manual. Despite making up less than 2 percent of annual car sales, manual cars could be making a small comeback, according to recent data—and the growing popularity is seen chiefly among drivers in their 20s.

I learned to drive on an automatic-transmission vehicle, like my parents and my peers. Dipping my toes into driving a manual was definitely a new endeavor, but one I felt excited to take on. Here’s how it went.

How to drive a stick shift, 101

Driving a manual transmission is a very involved process, and to me it feels almost like playing the piano. Like a piano, a manual car sports three pedals instead of the two in an automatic transmission car. The pedal on the left is the clutch, and the other two are the brake and the accelerator, which are in their normal spots, of course. There’s also a parking brake, which in the Bronco is located on the left of the steering wheel. 

“The basics of a manual: Always have your foot on the clutch, foot on the brake, and while you’re shifting, you’ll have your foot off the brake, but you’ll always use the clutch to shift the gear,” Schwalbe tells me. “You always want to be focused while you’re driving.”

Vehicles photo
The inside of the Ford Bronco. Charlotte Hu

The gears in a manual transmission car work similarly in principle to the gear shifts on a bicycle; certain sizes are better for achieving certain speeds. Most manual cars today have four or five forward gear ratios, although some come with more. The Bronco has a seven-speed manual, but for today, we’re only working with first and second gear (blame the limited space in the parking lot). Lower gears offer slower speeds but more torque, and as you go up each gear, the speeds start increasing and the torque decreases. 
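
That tradeoff is easy to see with a little arithmetic: wheel torque is engine torque multiplied by the overall gear reduction, while wheel speed is engine speed divided by it. The sketch below runs those two formulas over a set of ratios; every number in it is a made-up illustration, not the Bronco’s actual gearing.

```python
import math

# Illustrative numbers only, not any real vehicle's specifications.
ENGINE_TORQUE_LB_FT = 300
ENGINE_RPM = 3000
FINAL_DRIVE = 4.0          # axle ratio
TIRE_DIAMETER_IN = 32

GEAR_RATIOS = {1: 4.0, 2: 2.5, 3: 1.7, 4: 1.2, 5: 1.0}

for gear, ratio in GEAR_RATIOS.items():
    overall = ratio * FINAL_DRIVE
    wheel_torque = ENGINE_TORQUE_LB_FT * overall             # more reduction = more torque...
    wheel_rpm = ENGINE_RPM / overall                          # ...but slower wheel speed
    mph = wheel_rpm * math.pi * TIRE_DIAMETER_IN * 60 / 63360  # inches per minute to miles per hour
    print(f"gear {gear}: {wheel_torque:5.0f} lb-ft at the wheels, ~{mph:4.1f} mph at {ENGINE_RPM} rpm")
```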

[Related: The Ford Bronco is back and ready to take on the Jeep Wrangler in new ways]

“Parking brake is very important, because once you take your foot off the brake, foot off the clutch, everything, if you did not have your parking brake on, you could roll, since it’s in neutral, and nothing’s controlling it,” says Schwalbe.  

In the Bronco, there’s a mechanism that sits on the stick shift like a turtleneck that can slide up and down. To reverse, you pull the component up and move the stick into the “R” gear. 

With the overview done, we’re prepping for the drive. First, to start the car, I have to press my foot all the way down on the clutch, and have my right foot on the brake, before I can hit the engine-on button. 

The car roars to life. Now, we have to move. Schwalbe is coaching me from the passenger’s seat. Once I shift into first gear, I lift my foot off the brake pedal and set it on the accelerator. To keep from lurching forward or stalling, I slowly ease off the clutch while tapping down on the gas to move the car forward. To stop, I push the clutch in and press down on the brake. We repeat this process for a lap or two around the open lot before we get ready to shift gears. 

We’re cruising at 10 mph in first gear, and before we can shift up, my foot has to come off the gas and the clutch needs to be pushed in. “You always have to have your foot on the clutch to shift,” Schwalbe notes. 

We practice slowing, speeding up, and shifting gears for a couple more laps. Being in a manual car definitely makes you feel more present and engaged, as you’re evaluating both the vehicle and the surroundings constantly, and moving the car’s components to adjust. By the end of the lesson, I feel comfortable with it, but am in no way ready to hit the road. Baby steps. 

In the Dark Horse

To showcase the true range of a stick shift car, NASCAR driver Ryan Blaney took me for a spin in the Mustang Dark Horse, a sleek, ground-hugging vehicle designed to get up to high speeds, fast. Zooming around the lot, deftly maneuvering around the edges, made me understand why drivers have to work out their necks to withstand the G-forces of racing. For a brief moment while we were going straight, I had a sensation like I was on a roller coaster. 

[Related: An inside look at the data powering McLaren’s F1 team]

Blaney notes that he’s used to the heat; NASCAR vehicles don’t come with the kind of AC that’s blasting us at the moment. A luxury. This event falls in the middle of the season, which is made up of 38 races spanning February to November. The next one is in Chicago. Racing is a cool gig, but free time is sparse, with only a week of break during the season. “Weddings are hard to get to,” he says. “All the NASCAR drivers get married in the winter, because it’s our off-time.” 

NASCAR race cars all have manual transmissions, because they provide more control over the performance of the vehicle. Driving a manual well takes skill, I have learned; skills that Blaney clearly has, and that I probably will never attain. Specifically, Blaney uses an H-pattern shifter while racing, although NASCAR has been testing sequential shifters, too. Formula 1 race cars, on the other hand, use a type of semi-automatic gearbox that’s controlled with paddle shifters.

In my experience, I’ll say that it’s not the hardest new skill to pick up over a weekend. And I guess like all the old trends that are new again, nostalgia (along with a cheaper price tag if you’re in the market for a car) is too great of a lure to resist.

The post I learned stick shift from the pros—here’s how it went appeared first on Popular Science.

General Motors wants to predict when battery fires might happen https://www.popsci.com/technology/gm-ev-battery-software/ Thu, 06 Jul 2023 22:00:00 +0000 https://www.popsci.com/?p=553936
The Ultium platform is the foundation of GM’s EV strategy, including the battery cells, modules and pack, plus drive units containing electric motors and integrated power electronics
The Ultium platform supports GM’s EV architecture and includes the battery cells, modules and pack, plus drive units containing electric motors and integrated power electronics. GMC / GM

Through a new acquisition, GM now has software that can identify anomalies in EV cells.

Last week, General Motors (the company that owns the Chevrolet, Buick, GMC, and Cadillac car brands) announced that it had bought a software startup called ALGOLiON that specializes in predicting EV battery fires. 

Specifically, “the software uses sophisticated algorithms to identify miniscule changes that could impact battery health weeks earlier than other methods in use today without additional hardware or sensors all while the battery is still operating properly,” according to a press release GM put out about the acquisition.  

ALGOLiON was founded in 2014 by a pair of battery industry experts. The key product it developed was software called AlgoShield, which can detect early warning signs that lead to battery hazards. The company explained on its website that its software uses “patented quantitative algorithm systems” that analyze changes in electrical signals from the battery in charge and discharge modes. It also monitors DC currents and voltage signals coming from the device for abnormalities that could indicate the presence of defects. 
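
GM and ALGOLiON haven’t published the algorithms themselves, so the snippet below is only a generic illustration of the broader idea: watch an electrical signal and flag readings that deviate sharply from the recent baseline. The window size, threshold, and simulated voltages are arbitrary assumptions, not anything from AlgoShield.

```python
import numpy as np

def flag_anomalies(voltages: np.ndarray, window: int = 50, z_threshold: float = 4.0) -> list[int]:
    """Flag sample indices whose voltage deviates sharply from the recent rolling baseline."""
    flagged = []
    for i in range(window, len(voltages)):
        baseline = voltages[i - window:i]
        mean, std = baseline.mean(), baseline.std()
        if std > 0 and abs(voltages[i] - mean) / std > z_threshold:
            flagged.append(i)
    return flagged

# Simulated cell voltage: steady around 3.7 V with noise, then a small sudden dip.
rng = np.random.default_rng(0)
signal = 3.7 + 0.002 * rng.standard_normal(1000)
signal[600:] -= 0.03   # injected anomaly
print(flag_anomalies(signal)[:5])  # indices at or shortly after the dip
```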

[Related: Electric cars are better for the environment, no matter the power source]

This approach lets the software spot events that could lead up to a thermal runaway reaction, because it aims to catch early warning signs that appear before a temperature rise can be gauged by external sensors. The company has tested its tech in various labs across Europe, the US, and the UK. 

Lithium-ion battery fires have become a growing concern for manufacturers of consumer electronic devices from cell phones to e-bikes to cars. GM had to recall thousands of Chevy Bolt EVs in 2021 because of underlying issues with the battery. It was a very expensive ordeal. (The company has since discontinued the Bolt, which used an older form of battery tech, to focus on making vehicles just with its newest battery system, called Ultium.)

Failures with battery cells can be due to design flaws, or wear and tear combined with the wrong mix of temperature and motion. And of course, EV battery fires aren’t just a GM issue. Tesla has also experienced bouts of bad publicity around this problem, and Ford had to recall a dozen F-150 Lightnings after an incident earlier this year. While gas and diesel cars can certainly also catch on fire, EV fires are sometimes stubborn to put out.

The car industry is aware of the potential risks, and it has been researching ways to make components less flammable, more compartmentalized, or built from new materials.

The post General Motors wants to predict when battery fires might happen appeared first on Popular Science.

Peek inside the lab working on quantum memories https://www.popsci.com/technology/aws-quantum-memory/ Sat, 01 Jul 2023 11:00:00 +0000 https://www.popsci.com/?p=552794
the bottom of AWS' chandelier for quantum networks
A quantum memory is the heart of a quantum network. Charlotte Hu

Amazon’s secret ingredient? Synthetic diamonds.

Almost a year ago, Amazon Web Services (AWS) announced that it was partnering with Harvard University to test and develop a quantum network. In late June this year, AWS opened up its labs and let media outlets, including Popular Science, peep at its early models of a quantum repeater, which is similar to the classical amplifiers that carry optical signals down long stretches of fiber. 

“We’re developing the technology for quantum networks. They are not fully baked,” says Antia Lamas-Linares, head of the AWS Center for Quantum Networking. “There’s a lot of these technologies that have been partially demonstrated in academic labs that still need quite a lot of development to get to what we call a fully fledged quantum network.”

So what’s the point of this kind of technology? A quantum network could be used to distribute cryptographic keys without having to go through an intermediate party, or create anonymous multi-user broadcasts. 

The challenges of making a quantum network

In a quantum network, instead of communicating with classical bits that are one or zero, off or on, there are quantum bits, or qubits, that can be in a superposition of one and zero at the same time. Computer scientists can entangle these qubits and take advantage of their quantum properties to carry out interesting computations that would be hard, resource-intensive, or impossible to do classically.

Engineering photo
The concept behind AWS’ quantum network. Charlotte Hu

But similar to classical systems, in order to have a network, the team needs to be able to generate the qubits, move them around, and store them. And a really great way to move them around is with photons, or pieces of light, explains Nicholas Mondrik, quantum research scientist at AWS. They travel well, and “you can, with a little bit of cleverness, encode your qubit in a photon,” he says.

[Related: Chicago now has a 124-mile quantum network. This is what it’s for.]

Light is already used in classical fiber optic systems to carry information over long distances. The problem with this method is that after about 62 miles, things start to get choppy. That’s where optical amplifiers come into play: they detect when light gets weaker and boost it before sending it down the line. However, the optical amplifier and other devices used to pass along the light signal force that light to make a choice between one and zero, says Mondrik, and doing so would destroy the quantum information carried on the photon.  

Engineering photo
A prototype packaged quantum memory device. Charlotte Hu

One of the key innovations AWS has been workshopping is a quantum equivalent of a signal amplifier, called a quantum repeater. 

Diamonds are a quantum researcher’s best friend

To make a quantum repeater, they first needed to figure out how to make quantum memory—something that can store a qubit. That way, it can catch the incoming photon and allow it to be processed, before sending it on its way. The solution: synthetic, “quantum-grade” diamonds. 

Engineering photo
Model of a diamond lattice with silicon vacancies in the center. Charlotte Hu

“Within the structure of diamonds, sometimes you get defects. Sometimes you get diamonds that are not transparent, that have colors and hues. Those things are called color centers and they are impurities in the diamond,” says Lamas-Linares. “It turns out those impurities behave like an artificial atom, and you can use them to store the state of a photon. These interesting colors are what allows us to interface with light, and store the state of the flying qubit and manipulate that state.” 

The way to make the diamond suitable for storing photons is to first create “silicon vacancies.” To do that, researchers take a diamond that’s as pure carbon as possible and bombard the carbon-lattice with silicon atoms. These silicon atoms will knock out a couple of carbons, take their place, and behave as a fixed atom in the diamond lattice that can interact with photonic qubits through electrons. 

To guide the photon to the electron on the silicon atom, researchers built nanocavities around the silicon vacancy that essentially act like a set of mirrors, directing the light to where it needs to go. 

Engineering photo
The chandelier holding the quantum memory. Charlotte Hu

For this process to work, the team needs to stop the diamond structure from vibrating; they do this by cooling it down to near absolute zero. The device they use to do this is the same chandelier-like dilution refrigerator, vacuum, and thermal shield combination that’s used for superconducting quantum computers. But this infrastructure is notably smaller (about half the size), and the tail attachment is completely different. 

“This is where the silicon vacancy, where this diamond memory lives,” Mondrik says. “In order to make silicon vacancies work as a quantum memory, as a qubit, you need to put them in a strong tunable magnetic field.” Therefore, there are additional structures on the bottom of the chandelier that allow a superconducting magnet to be attached before the thermal shield goes on over the outside. 

There’s also a piezoelectric stack that helps researchers steer things around, a microwave line that helps them manipulate the qubit, an optical fiber that transfers light into the diamond cavity, and a microscope imaging system that extends from the bottom of the chandelier to the top to let researchers see what they’re doing. 

But not all the science is done in the dilution refrigerator. There are also room temperature workspaces stationed around the lab where qubits get made, measured, and characterized. 

Engineering photo
The room-temperature workspace for quantum researchers. Charlotte Hu

In its current form, the contraption where all the different components come together seems like a complicated assembly of wires, metals, and lenses. But eventually, the team wants to compress this technology into one singular, adaptable piece of hardware that they can drag and drop onto any type of quantum computing device. 

The post Peek inside the lab working on quantum memories appeared first on Popular Science.

How bits, bytes, ones, and zeros help a computer think https://www.popsci.com/technology/bit-vs-byte/ Thu, 29 Jun 2023 20:45:00 +0000 https://www.popsci.com/?p=552339
A close-up of a Raspberry Pi computer circuit board.
Bits and bytes are the fundamentals of computing. Harrison Broadbent / Unsplash

These computing units add up to make complex operations.

Computers today are capable of marvelous feats and complex calculations. But if you break down one of these problem-solving engines into its essentials, at the heart of it you’ll find the most basic unit of memory: a bit. Bits are tiny binary switches that underlie many of the fundamental operations computers perform. A bit is the smallest unit of memory, and it exists in one of two states: on or off, otherwise known as one and zero. Bits can also represent information and values like true (one) and false (zero), and they are considered the language of machines. 

Arranging these bits into clever and intricate matrices on semiconductor chips allows computer scientists to perform a wide variety of tasks, like encoding information and retrieving data from memory. As computer scientists stack more and more of these switches onto a processing unit, the switches can become unwieldy to manage, which is why bits are commonly organized into sets of eight, each known as a byte. 

Bits vs. bytes

The number of values that bits can represent grows exponentially with each bit you add. So if you have eight bits, or one byte, you can represent 256 states or values. Counting with bits works a little like counting on an abacus, except the column values are powers of two (128, 64, 32, 16, 8, 4, 2, 1). So while zero and one in the decimal number system correspond to zero and one in binary, two in decimal is 10 in binary, three in decimal is 11 in binary, and four in decimal is 100 in binary. The biggest number you can make with a byte is 255, which in binary is 11111111, because it’s 128+64+32+16+8+4+2+1.
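
Python’s built-in binary notation makes this easy to check for yourself; the snippet below is just the counting from the paragraph above, written out.

```python
# Counting in binary: each column is a power of two.
print(bin(2))    # 0b10
print(bin(3))    # 0b11
print(bin(4))    # 0b100

# The largest value a single byte (eight bits) can hold:
print(0b11111111)                          # 255
print(128 + 64 + 32 + 16 + 8 + 4 + 2 + 1)  # also 255
print(2**8 - 1)                            # 255 again; a byte has 256 states (0 through 255)
```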

You can also represent more complex information with bytes than you can with bits. While bits can only be one or zero, bytes can store data such as characters, symbols, and large numbers. 

[Related: The best external hard drives of this year]

Bytes are also commonly the smallest unit of information that can be “addressed.” That means that bytes can literally have addresses of sorts that tell the computer which cross wires (or cross streets, if you want to imagine a chip as a tiny city) to retrieve the stored value from. All programs come with pre-made commands, or operation codes, that correlate addresses with values, and values with variables. Different types of written codes can correlate the 256 states in a byte to items like letters. For example, the ASCII code for computer text (which assigns numeric values to letters, punctuation marks, and other characters) says that if you have a byte that looks like 01000100, or the decimal number 68, that corresponds to an uppercase “D.” By ordering the bytes in interesting combinations, you can also use codes to make colors.
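
That ASCII example can be verified in a couple of lines; ord and chr translate between a character and its numeric code.

```python
# The byte 01000100 is 68 in decimal, which ASCII maps to an uppercase "D".
print(0b01000100)               # 68
print(chr(0b01000100))          # D
print(ord("D"))                 # 68
print(format(ord("D"), "08b"))  # 01000100, the same byte written out as eight bits
```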

Bytes as a unit let you gauge how much memory different types of information take up. For example, if you were to type a note with 1,000 individual letters, that would take up about 1,000 bytes of memory. To describe larger amounts, the industry uses units like kilobytes, megabytes, gigabytes, and terabytes, but because memory has historically been counted in binary, here’s where it gets even more complicated: A kilobyte is not always 1,000 bytes (as the prefix would have you assume).

[Related: Best cloud storage services of this year]

In fact, in that binary counting, a kilobyte is 2^10, or 1,024 bytes. The same can be said for the other units of memory: the decimal-sounding prefixes are only rough representations of the binary amounts. A gigabyte in this sense is slightly larger than a billion bytes (2^30), and a terabyte is slightly larger than a trillion bytes (2^40). Special prefixes, like kibi, mebi, and gibi, were later introduced to account for the differences, although many computer scientists still prefer to stick with the old naming system.
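
A quick comparison shows how the decimal values and the binary values drift apart as the units get bigger, using the newer binary prefixes (kibibyte, gibibyte, tebibyte) to name the powers of two explicitly.

```python
# Decimal (SI) prefixes vs. binary prefixes
kilobyte, kibibyte = 10**3, 2**10        # 1,000 vs. 1,024 bytes
gigabyte, gibibyte = 10**9, 2**30
terabyte, tebibyte = 10**12, 2**40

print(kibibyte - kilobyte)                         # 24 bytes of difference
print(round((gibibyte / gigabyte - 1) * 100, 1))   # 7.4 (percent larger)
print(round((tebibyte / terabyte - 1) * 100, 1))   # 10.0 (percent larger)
```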

Internet speed is measured in bits 

Although data volume is measured in bytes (the largest hard drive in the world has around 100 terabytes of storage), data speeds, like the numbers internet companies advertise to tell you how fast their services are, tend to be measured in bits per second. That’s because the internet shuttles data one single bit at a time. 

Think of it like a stream of ones and zeros. For example, the bytes making up an email are chopped up into their constituent bits on one end and reassembled on the other end, even if pieces arrive out of order. 
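
Because speeds are quoted in bits per second while file sizes are quoted in bytes, converting between the two trips people up; the arithmetic is just a factor of eight, as in this quick example.

```python
# A "100 megabit per second" connection moves 100 million bits each second.
speed_bits_per_s = 100_000_000
speed_bytes_per_s = speed_bits_per_s / 8     # 12.5 million bytes per second

file_size_bytes = 1_000_000_000              # a 1 GB (decimal) file
seconds = file_size_bytes / speed_bytes_per_s
print(seconds)  # 80.0, about a minute and 20 seconds, ignoring network overhead
```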

The post How bits, bytes, ones, and zeros help a computer think appeared first on Popular Science.

This robot used a fake raspberry to practice picking fruit https://www.popsci.com/technology/raspberry-picking-robot/ Sat, 24 Jun 2023 11:00:00 +0000 https://www.popsci.com/?p=550940
fake raspberry for testing robot pickers
EPFL CREATE

Soon, it will leave the lab for a real world test.

It’s summer, and raspberries are in season. These soft, tartly sweet fruits are delicious but delicate. Most of the time, they have to be harvested by human hands. To help alleviate labor costs and worker shortages, a team at École polytechnique fédérale de Lausanne’s Computational Robot Design & Fabrication Lab (EPFL CREATE) in Switzerland made a robot that knows how to gently support, grasp, and pluck these berries without bruising or squishing them in the process. Their approach is detailed this week in the journal Communications Engineering.

Agriculture, like many other fields that have scaled up dramatically over the last few decades, has become increasingly reliant on complex technology, from sensors to robots and more. A growing number of farmers are interested in using robots for more time-intensive tasks such as harvesting strawberries, sweet peppers, apples, lettuce, and tomatoes. But many of these machines are still at an early stage, with the bottleneck being the inefficient and costly field trials companies typically have to undergo to fine-tune the robot. 

The EPFL team’s solution was to create a fake berry and stem for the robot to learn on. To familiarize robots with picking raspberries, the engineers made a silicone raspberry with an artificial stem that “can ‘tell’ the robot how much pressure is being applied, both while the fruit is still attached to the receptacle and after it’s been released,” according to a press release. The faux raspberry contains sensors that measure compression force and pressure. Two magnets hold the fruit and the stem together. 
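
The team’s control code isn’t spelled out in the press release, so the snippet below is only a hypothetical sketch of how such a sensorized berry might be used during training: tighten the gripper until the measured force is enough to hold the fruit, and back off before it reaches a bruising level. The function names and force thresholds here are invented for illustration.

```python
# Hypothetical sketch of force-limited grasping; read_grip_force() and the
# thresholds are illustrative stand-ins, not the EPFL team's actual interface.
HOLD_FORCE_N = 0.5     # assumed force needed to hold the berry without slipping
BRUISE_FORCE_N = 2.0   # assumed force above which the berry would be damaged
STEP = 0.05            # how much to tighten the gripper per control cycle

def read_grip_force() -> float:
    """Placeholder for the force reported by the sensorized silicone berry."""
    raise NotImplementedError

def close_gripper_gently(set_opening) -> bool:
    """Tighten until the berry is held, aborting if force nears the bruising limit."""
    opening = 1.0  # fully open
    while opening > 0.0:
        opening -= STEP
        set_opening(opening)
        force = read_grip_force()
        if force >= BRUISE_FORCE_N:
            return False   # back off: squeezing this hard would damage the fruit
        if force >= HOLD_FORCE_N:
            return True    # firm enough to pluck without bruising
    return False
```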

[Related: This lanternfly-egg-hunting robot could mean fewer bugs to squish]

In a small test with real raspberries, the robot was able to harvest 60 percent of the fruits without damaging them. That’s fairly low compared to the 90 percent from human harvesters on average, signaling to the team that there are still kinks to work out. For example, the robot’s range of reach is not great, and it gets confused when the berries are clustered together. 

Making a better fake raspberry could help the robot improve. Moreover, building an extended set that can simulate “environmental conditions such as lighting, temperature, and humidity could further close the Lab2Field reality gap,” the team wrote in the paper.

For now, the next step for the engineers is to modify the controllers and develop a camera system that “will allow robots to not only ‘feel’ raspberries, but also ‘see’ where they’re located and whether they’re ready to be harvested,” Josie Hughes, a professor at EPFL CREATE noted in the press release. 

They plan to put their pre-trained robot in a real field this summer to see how well it performs during the height of the local raspberry season in Switzerland. If the tech works as planned, the team wants to look into expanding its fake fruit repertoire to potentially cover berries, tomatoes, apricots or even grapes. 

Watch the robot and fake raspberry system at work from a trial run last year: 

The post This robot used a fake raspberry to practice picking fruit appeared first on Popular Science.

Microsoft sets its sights on a quantum supercomputer https://www.popsci.com/technology/microsoft-quantum-supercomputer/ Thu, 22 Jun 2023 22:00:00 +0000 https://www.popsci.com/?p=550613
Microsoft's quantum computing infrastructure
Topological qubits are part of Microsoft's quantum strategy. Microsoft Azure

It’s also pioneering a new type of "topological" qubit.

Earlier this week, Microsoft said that it had achieved a big milestone toward building its version of a quantum supercomputer: a massive machine that can tackle certain problems better than classical computers, owing to the quirks of quantum mechanics. 

These quirks would allow the computer to represent information as zero, one, or a combination of both. The unit of computing, in this case, would no longer be a classical zero or one bit. It would be a qubit—the quantum equivalent of the classical bit.  

There are many ways to build a quantum computer. Google and IBM are using an approach based on superconducting qubits, and some smaller companies are looking into neutral atom quantum computers. Microsoft’s strategy is different. It wants to work with a new kind of qubit, called a topological qubit, which involves some innovative and experimental physics. To make them, you need semiconductor and superconductor materials, electrostatic gates, special particles, nanowires, and a magnetic field. 

[Related: Quantum computers are starting to become more useful]

The milestone Microsoft claims to have passed: in a peer-reviewed paper in the journal Physical Review B, its researchers showed that their topological qubits can act as small, fast, and reliable units for computing. (A full video explaining how the physics behind topological qubits works is available in a company blog post.)

Microsoft’s next steps will involve integrating these units with controllable hardware and a less error-prone system. The qubits currently being researched tend to be fragile and finicky, prompting scientists to come up with methods to correct or work with the errors that arise when interference from the environment makes the qubits fall out of their quantum states.  

Alongside ensuring that the system can come together cohesively, Microsoft researchers will also improve the qubits themselves so they can achieve properties like entanglement through a process called “braiding.” This characteristic will allow the device to take on more complex operations. 

Then, the company will work to scale up the number of qubits that are linked together. Microsoft told TechCrunch that it aims to have a full quantum supercomputer made of these topological qubits within 10 years.

The post Microsoft sets its sights on a quantum supercomputer appeared first on Popular Science.

Quantum computers are starting to become more useful https://www.popsci.com/technology/quantum-computer-error-technique/ Sat, 17 Jun 2023 11:00:00 +0000 https://www.popsci.com/?p=549382
cryostat infrastructure of a quantum computer
This method helps quantum computers give more useful answers. IBM

A new development could be promising for researchers in fields like material science, physics, and more.

This week, IBM researchers published the results of a study in the journal Nature that demonstrated the ways in which they were able to use a 100-plus-qubit quantum computer to square up against a classical supercomputer. They pitted the two against one another with the task of simulating physics. 

“One of the ultimate goals of quantum computing is to simulate components of materials that classical computers have never efficiently simulated,” IBM said in a press release. “Being able to model these is a crucial step towards the ability to tackle challenges such as designing more efficient fertilizers, building better batteries, and creating new medicines.” 

Quantum computers, which can represent information as zero, one, or both at the same time, have been speculated to be more effective than classical computers at solving certain problems such as optimization, searching through an unsorted database, and simulating nature. But making a useful quantum computer has been hard, in part due to the delicate nature of qubits (the quantum equivalent of bits, the ones and zeros of classical computing). These qubits are super sensitive to noise, or disturbances from their surroundings, which can create errors in the calculations. As quantum processors get bigger, these small infractions can add up. 

[Related: How a quantum computer tackles a surprisingly difficult airport problem]

One way to get around the errors is to build a fault-tolerant quantum computer. The other is to somehow work with the errors by either mitigating them, correcting them, or canceling them out.

In the experiment publicized this week, IBM researchers worked with a 127-qubit Eagle quantum processor to model the spin dynamics of a material and predict properties such as how it responds to magnetic fields. In this simulation, they were able to generate large, entangled states in which certain simulated atoms are correlated with one another. By using a technique called zero noise extrapolation, in which the same calculation is repeated at deliberately amplified noise levels and the results are extrapolated back to a hypothetical noise-free case, the team was able to separate out the noise and estimate the true answer. To confirm that the answers they were getting from the quantum computer were reliable, another team of scientists at UC Berkeley performed the same simulations on a set of classical computers, and the two matched up. 
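
The final step of zero noise extrapolation is just curve fitting. The sketch below shows only that step, with made-up expectation values; it is not IBM’s implementation, and the noise factors and measurements are illustrative assumptions.

```python
import numpy as np

# Made-up expectation values measured at different noise amplification factors.
# A factor of 1 is the hardware's native noise; 2 and 3 are deliberately amplified.
noise_factors = np.array([1.0, 2.0, 3.0])
measured_values = np.array([0.78, 0.62, 0.49])

# Fit a low-order polynomial to the measurements and evaluate it at zero noise.
coeffs = np.polyfit(noise_factors, measured_values, deg=2)
zero_noise_estimate = np.polyval(coeffs, 0.0)
print(round(float(zero_noise_estimate), 3))  # 0.97, the extrapolated "noise-free" answer
```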

[Related: In photos: Journey to the center of a quantum computer]

However, classical computers have an upper limit when it comes to these types of problems, especially when the models become more complex. Although IBM’s quantum processor is still a ways away from quantum supremacy—where it can reliably outperform a classical computer on the same task—proving that it can provide useful answers even in the presence of noise is a notable accomplishment.  

“This is the first time we have seen quantum computers accurately model a physical system in nature beyond leading classical approaches,” Darío Gil, senior vice president and director of IBM Research, said in the press release. “To us, this milestone is a significant step in proving that today’s quantum computers are capable, scientific tools that can be used to model problems that are extremely difficult – and perhaps impossible – for classical systems, signaling that we are now entering a new era of utility for quantum computing.”

The post Quantum computers are starting to become more useful appeared first on Popular Science.

Google’s new AI will show how clothes look on different body types https://www.popsci.com/technology/google-try-on-generative-ai/ Wed, 14 Jun 2023 20:15:00 +0000 https://www.popsci.com/?p=548765
models posing
Google's new generative AI could change your online shopping experience. Google

It uses a technique called diffusion, but with a slight twist.

Google is launching a fashion-related generative AI that aims to make virtual clothing try-ons more realistic. The company compares it to Cher’s closet preview tech in the movie “Clueless.” 

This new tool will first be available in the US for brands like Anthropologie, LOFT, H&M, and Everlane. Products that you can use this feature on will be labeled with a “Try On” button. Google says it intends to extend the feature to more brands in the future. 

The tool doesn’t actually show how the clothes would look on you personally, but instead lets you find a model who you think best represents you physically. Building a tool that can mimic how real-life clothes drape, fold, cling, stretch, and wrinkle starts with photographs of a range of real models with different body shapes and sizes. That way, shoppers can pick a model with a certain skin tone or body type and see how the outfit looks on that model. The centerpiece of the generative AI is a diffusion technique that combines properties from an image of a garment with an image of a person.  

[Related: A guide to the internet’s favorite generative AIs]

“Diffusion is the process of gradually adding extra pixels (or ‘noise’) to an image until it becomes unrecognizable — and then removing the noise completely until the original image is reconstructed in perfect quality,” Ira Kemelmacher-Shlizerman, senior staff research scientist at Google Shopping, explained in a blog post. “Instead of using text as input during diffusion, we use a pair of images…Each image is sent to its own neural network (a U-net) and shares information with each other in a process called ‘cross-attention’ to generate the output: a photorealistic image of the person wearing the garment.”
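
The “adding noise” half of that process is straightforward to sketch. Below is a minimal, hypothetical example of a forward diffusion step on an image array; the noise-schedule values are invented for illustration, and Google's actual try-on model, its paired U-nets, and the cross-attention step are not reproduced here.

```python
import numpy as np

def add_noise(image: np.ndarray, alpha_bar: float, rng=np.random.default_rng(0)):
    """One forward-diffusion step: blend the image with Gaussian noise.

    alpha_bar close to 1.0 keeps most of the image; close to 0.0 leaves
    nearly pure noise. A trained model learns to reverse this process.
    """
    noise = rng.standard_normal(image.shape)
    return np.sqrt(alpha_bar) * image + np.sqrt(1.0 - alpha_bar) * noise

# Illustrative 64x64 grayscale "image" and a few points on a noise schedule.
image = np.ones((64, 64))
for alpha_bar in (0.9, 0.5, 0.1):
    noisy = add_noise(image, alpha_bar)
    print(f"alpha_bar={alpha_bar}: pixel std ~ {noisy.std():.2f}")
```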

An illustration of the diffusion technique. Google

[Related: Bella Hadid’s spray-on dress was inspired by the science of silly string]

Once built, the tool was trained on Google’s Shopping Graph, which houses some 35 billion products from retailers across the web. Researchers presented a paper describing the technique at the IEEE Conference on Computer Vision and Pattern Recognition this year, where they also showed how their images compared to existing approaches like geometric warping and other virtual try-on tools. 

Of course, even though it offers a good visualization of how clothes would look on a model similar to the shopper, it doesn’t promise a good fit and is only available for upper-body clothing items. For pants and skirts, users will just have to wait for the next iteration. 


Google is also kicking off the summer with a collection of other new AI features across its platforms, including maps, lens, labs and more.

The post Google’s new AI will show how clothes look on different body types appeared first on Popular Science.

This lanternfly-egg-hunting robot could mean fewer bugs to squish https://www.popsci.com/technology/robot-kill-lanternfly/ Sat, 10 Jun 2023 11:00:00 +0000 https://www.popsci.com/?p=547640
That pop of color on the adult spotted lanternfly is a warning to predators—and property owners. Stephen Ausmus/USDA

It’s good to get them before they can fly away.

It’s that time of the year again. The invasive, crop-damaging spotted lanternflies are emerging, as they typically do in springtime. You may already be seeing some of the polka-dotted nymphs out and about. As with the adult lanternflies, the advice from experts is to kill them on sight.

But another way to prevent these pests from spreading is to scrape off and kill the egg masses that these bugs leave on wood, vehicles, and furniture. Inspecting every tree and every surface for lanternfly eggs is no fun task. That’s why a team of undergraduate engineering students at Carnegie Mellon University programmed a robot, called TartanPest, to do it. 

TartanPest was designed as a part of the Farm Robotics Challenge, where teams of students had to design a creative add-on to the preexisting tractor-like farm-ng robot in order to tackle a problem in the food and agriculture industry. 

TartanPest scrubbing an egg mass off a tree. Carnegie Mellon University

[Related: Taiwan sent mosquito-fighting robots into its sewers]

Since lanternflies voraciously munch on a variety of economically important plants like hardwoods, ornamentals, and grapevines, getting rid of them before they become a problem can save farms from potential damage. The solution from the team at Carnegie Mellon is a robot arm with a machine learning-powered vision system for spotting the egg masses, paired with an attachment that can brush them off. 

TartanPest in the wild. Carnegie Mellon University

The machine learning model was trained with 700 images of lanternfly egg masses from the platform iNaturalist, where citizen scientists can upload photos of plant or wildlife observations they have made. 
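
For a rough sense of what training on a small labeled image set can look like, here is a hypothetical sketch that fine-tunes an off-the-shelf classifier to distinguish egg masses from background bark. The folder layout, class count, and hyperparameters are assumptions for illustration; the CMU team's actual vision system is not described here in enough detail to reproduce.

```python
import torch
from torch import nn
from torchvision import datasets, models, transforms

# Hypothetical dataset layout: egg_data/{egg_mass,background}/*.jpg
transform = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])
dataset = datasets.ImageFolder("egg_data", transform=transform)
loader = torch.utils.data.DataLoader(dataset, batch_size=16, shuffle=True)

# Start from a pretrained backbone and swap in a two-class head.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, 2)

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.CrossEntropyLoss()

model.train()
for epoch in range(5):
    for images, labels in loader:
        optimizer.zero_grad()
        loss = loss_fn(model(images), labels)
        loss.backward()
        optimizer.step()
```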

Of course, TartanPest is not the first robot that helps humans not get their hands dirty (from murdering bugs). Making robots that can find and kill harmful pests on farms has long been a topic of discussion among engineers, as these could be key to decreasing the amount of pesticides used. Beyond crops, bug-terminating robots could have a place in households, too. 

But ask yourself this if you’re squeamish about robots designed to kill bugs: Would you rather have laser-wielding robots snuff out cockroaches and mosquitoes, or would you prefer to suck it up and squish them yourself? 

Watch the robot at work: 

The post This lanternfly-egg-hunting robot could mean fewer bugs to squish appeared first on Popular Science.

Taiwan sent mosquito-fighting robots into its sewers https://www.popsci.com/technology/mosquito-killing-robot/ Fri, 09 Jun 2023 14:00:00 +0000 https://www.popsci.com/?p=547292
Mosquito larvae could be hiding out in sewers. Denny Müller / Unsplash

They were equipped with insecticide and hot water blasters.

Mosquitoes are a problem, especially when they’re carrying viruses like dengue and Zika that they can pass to unsuspecting humans with a bite. Since there’s no widely available vaccine against dengue, public health experts have to focus on controlling the blood-sucking critters themselves. As cities grew, mosquitoes in search of standing water took to the sewers to breed, making them harder to monitor. In response to this problem, a team of scientists at Taiwan National Mosquito-Borne Diseases Control Research Center had an idea: send in robots.

It’s a method that’s been tested by other countries, but often with flying robots that keep an eye on the ground below, and not with remote-controlled crawlers that snoop in sewers. In a new study published this week in PLOS Neglected Tropical Diseases, the team dispatched unmanned vehicles underground to scope out and eliminate mosquito larvae that have congregated in ditches under and around Kaohsiung city in southern Taiwan. After all, getting the pests before they develop wings is much easier than trying to catch them in the air. 

[Related: Spider robots could soon be swarming Japan’s aging sewer systems]

Robots at work. Chen et al., PLOS Neglected Tropical Diseases

These multipurpose robots come with a suite of tools, digital cameras, and LED lights that help them visualize the sewer environment, detect mosquito larvae in areas with standing water, and either spray the area with insecticide or blast it with hot water. The wheeled robots crawl at a rate of 5 meters/min (that’s about 16 feet per minute). They’re also designed in a way that prevents them from being overturned in areas where it would be difficult for humans to set them right again. The targets for these robots are mosquitoes in the genus Aedes, which contains several species that commonly carry infectious tropical diseases. 

[Related: Look inside London’s new Super Sewer, an engineering marvel for rubbish and poo]

To surveil mosquito activity around the ditches, scientists also set up a series of Gravitraps to lure and capture female mosquitoes. The team later analyzed these specimens to see where the dengue-positive mosquitoes tended to go. In many ditches with high concentrations of dengue-positive mosquitoes, traps showed that dengue positivity rates dropped after the robots were deployed (probably because the mosquito population as a whole took a dip as well), indicating that the bots could be a useful tool for disease control and prevention. Of course, they could always be further improved: with better sensors, AI, mobility, and autonomy, these robots could become more usable and practical.

The post Taiwan sent mosquito-fighting robots into its sewers appeared first on Popular Science.

Yale’s new research tool will lead you down a rabbit hole of knowledge https://www.popsci.com/technology/yale-lux/ Mon, 05 Jun 2023 11:00:00 +0000 https://www.popsci.com/?p=545580
Yale Center for British Art in New Haven, CT. Elizabeth Felicella

LUX is a digital platform that draws information from the university’s museums, libraries, and archives.

LUX, a free new tool from Yale University, is perfect for research projects where you want to be led down a rabbit hole of infinite connections adjacent to a subject of interest. It’s a central hub that contains 17 million searchable objects across Yale’s museums, archives, and libraries. 

The tool works somewhat like a search engine. However, search engines tend to return hits that then offer you links to travel onwards to a new site. LUX builds relationships between the object you’re searching for and other related objects in the collection. It goes beyond the objects themselves and finds obscure connections. For example, if you were searching for a piece of artwork, it would surface other works from the same author, as well as other art created around the same time or in the same location. Or, if you were to search for meteorites, it would pull up images of actual meteorites from the university’s museums, as well as art and books about meteorites. Previously you would have to go to different places—a natural history museum for the meteorites, and a library for books—or Google separate entries and piece together these different resources. 

At the heart of LUX is a backend data model called a knowledge graph. Knowledge graphs are usually built from datasets from different sources and are a way of organizing that information into a network of relationships. You can think of one like the evidence pin-up board detectives use to visualize the connections between people, objects, places, and events. The concept was arguably popularized by Google in 2012. Van Gogh World Wide operates off a similar data model. And the technique is only becoming more popular in the art world as more works get digitized.
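
At its simplest, a knowledge graph is a collection of subject-predicate-object statements that can be traversed in any direction. A toy sketch of that idea follows; the entities and relationships are invented for illustration and are not drawn from LUX's actual data model.

```python
# Each fact is a (subject, predicate, object) triple.
triples = [
    ("Meteorite NWA 869", "held_by", "Peabody Museum"),
    ("Meteorite NWA 869", "classified_as", "chondrite"),
    ("'Meteor Shower' (print)", "depicts", "chondrite"),
    ("'Meteor Shower' (print)", "created_by", "Jane Doe"),
]

def related(entity):
    """Return every fact that mentions the entity, in either position."""
    return [t for t in triples if entity in (t[0], t[2])]

# Following links outward from one hit surfaces adjacent objects,
# which is roughly how a search for meteorites can lead to artworks.
for subj, pred, obj in related("chondrite"):
    print(subj, pred, obj)
```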

[Related: Why researchers surveyed more than 1.1 billion objects across 73 museums]

“No one likes to search, everyone likes to find,” Robert Sanderson, senior director for digital cultural heritage at Yale University, said at a media briefing Thursday. LUX is able to provide robust context around the object that’s being searched for. 

When you enter a term into the search bar, tabs on the page separate the search into different categories: objects, works, people and groups, places, concepts, and events. The advanced search feature as well as filters on the side allow you to narrow down your search. When you click through to a page, there might be hyperlinks that can lead you to discover cross-connections. For example, if you click through to a link of an artwork, then onto the hyperlink of its painter, you will find more information about concepts influenced by this painter, their production timeline, related people and groups, and other works created by, or created about, the artist. 

The project to build this tool has been in the works for the past five years. And Yale hopes that by doing the heavy lifting, it can make it easier for other institutions to build their own version of LUX. As such, the code for LUX will be open-sourced. That means anyone can view the configurations on databases, as well as all the transformations Yale did on the data. The database that does the searching is proprietary, but can be licensed. There will be a smaller, similar database that will be more widely available for smaller institutions with fewer resources.

[Related: This new AI tool from Google could change the way we search online]

Importantly, LUX does not use artificial intelligence. Instead of using large language models, the team relied on human intelligence: it hired students to build out the metadata in depth and add identifiers to datasets within the collections over a span of six years.

According to Sanderson, the team did run some experiments with ChatGPT, asking it to find specific objects in collections. The AI would give an accession number and a url link for the query, but the link often didn’t work, and the number led to a completely different object. “The model understands how language works, but it’s not a knowledge model, it’s not a fact model,” he said. “You get answers that are convincing but wrong.”

The LUX that’s available to the public today is still a work in progress. Already, the team has ideas on how to improve it, and new features that they’re thinking of adding. You’ll notice that on the result pages, there will be a big blue button for user feedback if there’s an ethical issue or if the data is wrong for some reason.

The post Yale’s new research tool will lead you down a rabbit hole of knowledge appeared first on Popular Science.

Ford EVs can soon be charged at Tesla stations https://www.popsci.com/technology/ford-tesla-supercharger/ Mon, 29 May 2023 11:00:00 +0000 https://www.popsci.com/?p=544152
Tesla plans to open its charging stations to other electric vehicles. Tesla

Mustang Mach-E, F-150 Lightning and E-Transit customers can start using adapters to plug into superchargers.

Ford and Tesla have been rivals for years in the electric vehicle market, but a new agreement may change their relationship status. On Thursday, Ford said in a press release that its EV customers would be able to get access to 12,000 Tesla superchargers across the US and Canada by spring of next year. This will broaden the availability of charging stations by adding to the network of 10,000 DC fast-chargers and over 80,000 level-two chargers that Ford has been building out for the last decade.

Most EVs on the market use Combined Charging System (CCS) ports for fast charging. Teslas have their own charging port, called the North American Charging Standard (NACS), but Tesla owners can use special adapters to charge at non-Tesla power stations. 

Before 2021, that meant Teslas could charge at public power stations, but no other EVs could charge at a Tesla station. Starting in November 2021, however, Tesla began making some (but not all) of its superchargers open to non-Tesla EVs through a “Magic Dock” adapter. Drivers who wanted to use this still had to download the Tesla app on their phones to make it work. The Ford partnership will change that process, making things easier for people driving vehicles like the Mach-E or F-150 Lightning.  

“Mustang Mach-E, F-150 Lightning and E-Transit customers will be able to access the Superchargers via an adapter and software integration along with activation and payment via FordPass or Ford Pro Intelligence,” the company said. “In 2025, Ford will offer next-generation electric vehicles with the North American Charging Standard (NACS) connector built-in, eliminating the need for an adapter to access Tesla Superchargers.”

[Related: Electric cars are better for the environment, no matter the power source]

As EVs become more commonplace, charging availability and range anxiety become understandable concerns for many owners. The only way to relieve that is to build a charging infrastructure that parallels the distribution of gas stations across the country. The Biden Administration has made building public chargers a priority, and last fall, the Department of Transportation said that it had signed off on the EV charging plans for all US states, as well as DC and Puerto Rico. States like Michigan and Indiana have even come up with ambitious plans to make wireless charging possible through special roadway systems

When it comes to smoothing over the potholes in the way of EV adoption in the US, more accessible chargers are never a bad thing. Tesla, having led the EV game for so long, seems like it’s finally ready to share its resources for the greater good. “Essentially, the idea is that we don’t want the Tesla Supercharger network to be like a walled garden. We want it to be something that is supportive of electrification and sustainable transport in general,” Tesla CEO Elon Musk said Thursday in Twitter Spaces, as reported by TechCrunch.  

“It seems totally ridiculous that we have an infrastructure problem, and we can’t even agree on what plug to use,” Ford CEO Jim Farley said at a Morgan Stanley conference, CNBC reported. “I think the first step is to work together in a way we haven’t, probably with the new EV brands and the traditional auto companies.”

The post Ford EVs can soon be charged at Tesla stations appeared first on Popular Science.

Electric cars are better for the environment, no matter the power source https://www.popsci.com/technology/are-electric-cars-better-for-the-environment/ Fri, 26 May 2023 14:00:00 +0000 https://www.popsci.com/?p=543822
An Ioniq 6 electric vehicle. Hyundai

Experts say that across the board, EVs are a win compared to similar gas-powered vehicles.

These days, it seems like every carmaker—from those focused on luxury options to those with an eye more toward the economical—is getting into electric vehicles. And with new US policies around purchasing incentives and infrastructure improvements, consumers might be more on board as well. But many people are still concerned about whether electric vehicles are truly better for the environment overall, considering certain questions surrounding their production process

Despite concerns about the pollution generated from mining materials for batteries and the manufacturing process for the EVs themselves, the environmental and energy experts PopSci spoke to say that across the board, electric vehicles are still better for the environment than similar gasoline or diesel-powered models. 

When comparing a typical commercial electric vehicle to a gasoline vehicle of the same size, there are benefits across many different dimensions

“We do know, for instance, if we’re looking at carbon dioxide emissions, greenhouse gas emissions, that electric vehicles operating on the typical electric grid can end up with fewer greenhouse gas emissions over the life of their vehicle,” says Dave Gohlke, an energy and environmental analyst at Argonne National Lab. “The fuel consumption (using electricity to generate the fuel as opposed to burning petroleum) ends up releasing fewer emissions per mile and over the course of the vehicle’s expected lifetime.”

[Related: An electrified car isn’t the same thing as an electric one. Here’s the difference.]

How the electricity gets made

With greenhouse gas emissions, it’s also worth considering how the electricity for charging the EV is generated. Electricity made by a coal- or oil-burning plant will have higher emissions compared to a natural gas plant, while nuclear and renewable energy will have the fewest emissions. But even an electric vehicle that got its juice from a coal plant tends to have fewer emissions compared to a gasoline vehicle of the same size, Gohlke says. “And that comes down to the fact that a coal power plant is huge. It’s able to generate electricity at a better scale, [be] more efficient, as opposed to your relatively small engine that fits in the hood of your car.” Power plants could additionally have devices in place to scrub their smokestacks or capture some of the emissions that arise.  

EVs also produce no tailpipe emissions, which means reductions in particulate matter or in smog precursors that contribute to local air pollution.

“The latest best evidence right now indicates that in almost everywhere in the US, electric vehicles are better for the environment than conventional vehicles,” says Kenneth Gillingham, professor of environmental and energy economics at Yale School of the Environment. “How much better for the environment depends on where you charge and what time you charge.”

Electric motors tend to be more efficient compared to the spark ignition engine used in gasoline cars or the compression ignition engine used in diesel cars, where there’s usually a lot of waste heat and wasted energy.

Let’s talk about EV production

“It’s definitely the case that any technology has downsides. With technology you have to use resources, [the] raw materials we have available, and convert them to a new form,” says Jessika Trancik, a professor of data, systems, and society at the Massachusetts Institute of Technology. “And that usually comes with some environmental impacts. No technology is perfect in that sense, but when it comes to evaluating a technology, we have to think of what services it’s providing, and what technology providing the same service it’s replacing.”

Creating an EV produces pollution during the manufacturing process. “Greenhouse gas emissions associated with producing an electric vehicle are almost twice that of an internal combustion vehicle…that is due primarily to the battery. You’re actually increasing greenhouse gas emissions to produce the vehicle, but there’s a net overall lifecycle benefit or reduction because of the significant savings in the use of the vehicle,” says Gregory Keoleian, the director of the Center for Sustainable Systems at the University of Michigan. “We found in terms of the overall lifecycle, on average, across the United States, taking into account temperature effects, grid effects, there was 57 percent reduction in greenhouse gas emissions for a new electric vehicle compared to a new combustion engine vehicle.” 

In terms of reducing greenhouse gas emissions associated with operating the vehicles, fully battery-powered electric vehicles were the best, followed by plug-in hybrids, and then hybrids, with internal combustion engine vehicles faring the worst, Keoleian notes. Range anxiety might still be top of mind for some drivers, but he adds that households with more than one vehicle can consider diversifying their fleet to add an EV for everyday use, when appropriate, and save the gas vehicle (or the gas feature on their hybrids) for longer trips.

The breakeven point, at which the emissions from producing and operating an electric vehicle start to drop below those of a gasoline vehicle of similar make and model, occurs at around two years in, or around 20,000 to 50,000 miles. But when that happens can vary on a case-by-case basis. “If you have almost no carbon electricity, and you’re charging off solar panels on your own roof almost exclusively, that breakeven point will be sooner,” says Gohlke. “If you’re somewhere with a very carbon intensive grid, that breakeven point will be a little bit later. It depends on the style of your vehicle as well because of the materials that go into it.” 
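
The arithmetic behind that breakeven point is simple to sketch. The numbers below are illustrative placeholders rather than figures from the researchers quoted here; the point is only to show how an EV's extra manufacturing emissions get paid back through lower per-mile emissions.

```python
# Illustrative, made-up lifecycle numbers (kg CO2-equivalent).
ev_manufacturing = 12_000      # EV, including its battery
gas_manufacturing = 7_000      # comparable gasoline car

ev_per_mile = 0.15             # charging on an average grid
gas_per_mile = 0.40            # gasoline, including fuel production

# Miles at which cumulative EV emissions drop below the gasoline car's.
breakeven_miles = (ev_manufacturing - gas_manufacturing) / (gas_per_mile - ev_per_mile)
print(f"Breakeven after roughly {breakeven_miles:,.0f} miles")  # ~20,000 miles
```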

[Related: Why solid-state batteries are the next frontier for EV makers]

For context, Gohlke notes that the average vehicle on the road right now is around 12 years old, based on registration data, and these vehicles are expected to drive approximately 200,000 miles over their lifetimes. 

“Obviously if you drive off your dealer’s lot and you drive right into a light pole and that car never takes more than a single mile, that single vehicle will have had more embedded emissions than if you had wrecked a gasoline car on your first drive,” says Gohlke. “But if you look at the entire fleet of vehicles, all 200-plus-million vehicles that are out there and how long we expect them to survive, over the life of the vehicle, each of those electric vehicles is expected to consume less energy and emit lower emissions than the corresponding gas vehicle would’ve been.”

To put things in perspective, Gillingham says that extracting and transporting fossil fuels like oil is energy intensive as well. When you weigh those factors, electric vehicle production doesn’t appear that much worse than the production of gasoline vehicles, he says. “Increasingly, they’re actually looking better depending on the battery chemistry and where the batteries are made.” 

And while it’s true that there are issues with mines, the petrol economy has damaged a lot of the environment and continues to do so. That’s why improving individual vehicle efficiency needs to be paired with reducing overall consumption.

EV batteries are getting better

Mined materials like rare metals can have harmful social and environmental effects, but that’s an economy-wide problem. There are many metals that are being used in batteries, but the use of metals is nothing new, says Trancik. Metals can be found in a range of household products and appliances that many people use in their daily lives. 

Plus, there have been dramatic improvements in battery technology and the engineering of the vehicle itself in the past decade. The batteries have become cheaper, safer, more durable, faster charging, and longer lasting. 

“There’s still a lot of room to improve further. There’s room for improved chemistry of the batteries and improved packaging and improved coolant systems and software that manages the batteries,” says Gillingham.

The two primary battery chemistries used in electric vehicles today are NMC (nickel-manganese-cobalt) and LFP (lithium iron phosphate). NMC batteries tend to use more scarce metals like cobalt from the Congo, but they are also more energy dense. LFP uses more abundant metals, and although the technology is improving fast, it’s still at an earlier stage, sensitive to cold weather, and not quite as energy dense. LFP tends to be a good fit for utility-scale cases, like storing electricity on the grid. 

[Related: Could swappable EV batteries replace charging stations?]

Electric vehicles also offer an advantage when it comes to fewer trips to the mechanic; conventional vehicles have more moving parts that can break down. “You’re more likely to be doing maintenance on a conventional vehicle,” says Gillingham. He says that there have been Teslas in his studies that are around eight years old, with 300,000 miles on them, which means that even though the battery does tend to degrade a little every year, that degradation is fairly modest.

Eventually, if the electric vehicle markets grow substantially, and there’s many of these vehicles in circulation, reusing the metals in the cars can increase their benefits. “This is something that you can’t really do with the fossil fuels that have already been combusted in an internal combustion engine,” says Trancik. “There is a potential to set up that circularity in the supply chain of those metals that’s not readily done with fossil fuels.”

Since batteries are fairly environmentally costly, the best case is for consumers who are interested in EVs to get a car with a small battery, or a plug-in hybrid electric car that runs on battery power most of the time. “A Toyota Corolla-sized car, maybe with some hybridization, could in many cases, be better for the environment than a gigantic Hummer-sized electric vehicle,” says Gillingham. (The charts in this New York Times article help visualize that distinction.) 

Where policies could help

Electric vehicles are already better for the environment and becoming increasingly better for the environment. 

The biggest factor that could make EVs even better is if the electrical grid goes fully carbon free. Policies that provide subsidies for carbon-free power, or carbon taxes to incentivize cleaner power, could help in this respect. 

The other aspect that would make a difference is to encourage more efficient electric vehicles and to discourage the production of enormous electric vehicles. “Some people may need a pickup truck for work. But if you don’t need a large car for an actual activity, it’s certainly better to have a more reasonably sized car,” Gillingham says.  

Plus, electrifying public transportation, buses, and vehicles like the fleet of trucks run by the USPS can have a big impact because of how often they’re used. Making these vehicles electric can reduce air pollution from idling, and routes can be designed so that they don’t need as large of a battery.  

“The rollout of EVs in general has been slower than demand would support…There’s potentially a larger market for EVs,” Gillingham says. The holdup is due mainly to supply chain problems

Switching over completely to EVs is, of course, not the end-all solution for the world’s environmental woes. Currently, car culture is very deeply embedded in American culture and consumerism in general, Gillingham says, and that’s not easy to change. When it comes to climate policy around transportation, it needs to address all the different modes of transportation that people use and the industrial energy services to bring down greenhouse gas emissions across the board. 

The greenest form of transportation is walking, followed by biking, followed by using public transit. Electrifying the vehicles that can be electrified is great, but policies should also consider the ways cities are designed—are they walkable and livable, with reliable public transit connecting communities to where they need to go? 

“There’s definitely a number of different modes of transport that need to be addressed and green modes of transport that need to be supported,” says Trancik. “We really need to be thinking holistically about all these ways to reduce greenhouse gas emissions.”

The post Electric cars are better for the environment, no matter the power source appeared first on Popular Science.

Spy tech and rigged eggs help scientists study the secret lives of animals https://www.popsci.com/technology/oregon-zoo-sensor-condor-egg/ Mon, 22 May 2023 11:00:00 +0000 https://www.popsci.com/?p=542389
The Oregon Zoo isn't putting all its eggs in a basket when it comes to condor conservation. The Dark Queen / Unsplash

The field of natural sciences has been embracing sensors, cameras, and recorders packaged in crafty forms.

Last week, The New York Times went backstage at the Oregon Zoo for an intimate look at the fake eggs the zoo was developing as a part of its endangered condor nursery program. 

The idea is that caretakers can swap out the real eggs the birds lay for smart egg spies that look and feel the same. These specially designed, 3D-printed eggs have been equipped with sensors that can monitor the general environment of the nest and the natural behaviors of the California condor parents (like how long they sat on the egg for, and when they switched off between parents). 

In addition to recording data related to surrounding temperature and movement, there’s also a tiny audio recorder that can capture ambient sounds. So what’s the use of the whole charade? 

The Oregon Zoo’s aim is to use all the data gathered by the egg to better recreate natural conditions within its artificial incubators, whether that means adjusting the temperatures the machines are set to, integrating periodic movements, or playing back the sounds from the nest. Ideally, that will improve the outcomes of its breeding efforts. And it’s not the only group tinkering with tech like this.

A ‘spy hippo’

This setup at the Oregon Zoo may sound vaguely familiar to you, if you’ve been a fan of the PBS show “Spy in the Wild.” The central gag of the series is that engineers craft hyper-realistic robots masquerading as animals, eggs, boulders, and more to get up close and personal with a medley of wildlife from all reaches of the planet. 

[Related: Need to fight invasive fish? Just introduce a scary robot]

If peeking at the inner lives of zoo animals is a task in need of an innovative tech solution, imagine the challenges of studying animals in their natural habitats, in regions that are typically precarious or even treacherous for humans to visit. Add on cameras and other heavy equipment, and it becomes an even more demanding trip. Instead of having humans do the Jane Goodall method of community immersion with animals, these spies in disguise can provide invaluable insights into group or individual behavior and habits without being intrusive or overly invasive to their ordinary way of life.  

A penguin rover

Testing unconventional methods like these is key for researchers to understand as much as they can about endangered animals, since scientists have to gather important information in a relatively short time frame to help with their conservation. 

[Related: Open data is a blessing for science—but it comes with its own curses]

To prove that these inventions are not all gimmick and have some practical utility, a 2014 study in Nature showed that a penguin-shaped rover can get more useful data on penguin colonies than human researchers, whose presence elevated stress levels in the animals. 

The point of all this animal espionage?

Minimizing the observer effects created by human scientists has always been a struggle in behavioral research in the natural sciences. Along with advances in other technologies, like better cameras and more instantaneous data transfer, ingenious new sensor devices like the spy eggs are changing the field itself. The other benefit is that, every once in a while, non-scientists also get exclusive access to the secret lives of these critters, through shows like “Spy in the Wild,” and can use them as portals for engaging with the world around them.

The post Spy tech and rigged eggs help scientists study the secret lives of animals appeared first on Popular Science.

An inside look at the data powering McLaren’s F1 team https://www.popsci.com/technology/mclaren-f1-data-technology/ Tue, 16 May 2023 19:00:00 +0000 https://www.popsci.com/?p=541361
McLaren’s F1 race car, seen here in the garage near the track, belonging to driver Oscar Piastri. McLaren

Go behind the scenes at the Miami Grand Prix and see how engineers prep for the big race.

Formula 1, a 70-year-old motorsport, has recently undergone a cultural renaissance. That renaissance has been fueled in large part by the growing popularity of the glitzy, melodrama-filled Netflix reality series “Drive to Survive,” which Mercedes team principal Toto Wolff once said was closer to the fictional “Top Gun” than a documentary. Relaxed social media rules after F1 changed owners have also helped provide a look into the interior lives of drivers turned new-age celebrities. 

As a result, there’s been an explosion of interest among US audiences, which means more eyeballs and more ticket sales. Delving into the highly technical world of F1 can be daunting, so here are the basics to know about the design of the sport—plus an inside look at the complex web of communications and computer science at work behind the scenes. 

Data and a new era of F1

Increasingly, Formula 1 has become a data-driven sport; this becomes evident when you look into the garages of modern F1 teams. 

“It started really around 60, 70 years ago with just a guy with a stopwatch, figuring out which was the fastest lap—to this day and age, having every car equipped with sensors that generate around 1.1 million data points each second,” says Luuk Figdor, principal sports tech advisor with Amazon Web Services (AWS), which is a technology partner for F1. “There’s a huge amount of data that’s being created, and that’s per car.” Part of AWS’ job is to put this data in a format that is understandable not only to experts, but also to viewers at home, with features like F1 Insights.

There was a time when cars had unreliable radios, and engineers could only get data on race performance at the very end. Now, things look much different. Every car is able to send instantaneous updates on steering, G-force, speed, fuel usage, engine and tire status, gear status, and much more. Around the track itself, there are more accurate ways for teams to get GPS data on car positions, weather data, and timing data. 

“This is data from certain sensors that are drilled into the track before the race and there’s also a transponder in the car,” Figdor explains. “And whenever the car passes the sensor, it sends a signal. Based on those signals you can calculate how long it took for a car to pass a certain section of the track.” 
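
A stripped-down version of that timing calculation looks something like the sketch below, assuming each trackside sensor logs a timestamp as the car's transponder passes over it. The sensor names and times are invented for illustration.

```python
# Hypothetical timestamps (seconds into the session) logged as one car
# passes each timing sensor embedded in the track.
crossings = {"sector_1_start": 312.004, "sector_1_end": 341.877,
             "sector_2_end": 376.420, "lap_end": 401.963}

sector_1 = crossings["sector_1_end"] - crossings["sector_1_start"]
sector_2 = crossings["sector_2_end"] - crossings["sector_1_end"]
sector_3 = crossings["lap_end"] - crossings["sector_2_end"]

print(f"S1 {sector_1:.3f}s  S2 {sector_2:.3f}s  S3 {sector_3:.3f}s")
print(f"Lap {crossings['lap_end'] - crossings['sector_1_start']:.3f}s")
```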

These innovations have made racing more competitive over the years, and made the margins in speed between some of the cars much closer. Fractions of seconds can divide cars coming in first or second place.

F1 101

For newbies, here’s a quick refresher on the rules of the game. Twenty international drivers from 10 teams compete for two championships: the Driver’s Championship and the Constructors’ Championship.

Pre-season testing starts in late February, and racing spans from March to November. There are 20 or so races at locations around the world, and each race is around 300 km (186 miles), which equals 50 to 70 laps (except for the Monaco circuit, which is shorter). Drivers get points for finishing high in the order—those who place 10th or below get no points. Individuals with the highest points win the Driver’s Championship, and teams with the highest points win the Constructors’ Championship. 

A good car is as essential for winning as a good driver. And an assortment of engineers are crucial for ensuring that both the driver and the car are performing at their best. In addition to steering and shifting gears, drivers can control many other settings like engine power and brake balance. Races are rain or shine, but special tires are often required for wet roads. Every team is required to build certain elements of their car, including the chassis, from scratch (they are allowed to buy engines from other suppliers). The goal is to have a car with low air resistance, high speed, low fuel consumption, and good grip on the track. Most cars can reach speeds of around 200 mph. Certain engineering specifications create the downward lift needed to keep the cars on the ground. 

Technical regulations from the FIA contain rules about how the cars can be built—what’s allowed and not allowed. Rules can change from season to season, and teams tend to refresh their designs each year. Every concept undergoes thorough aerodynamic and road testing, and modifications can be made during the season. 

The scene backstage before a race weekend

It’s the Thursday before the second-ever Miami Grand Prix. In true Florida fashion, it’s sweltering. The imposing Hard Rock Stadium in Miami Gardens has been transformed into a temporary F1 campus in preparation for race weekend, with the race track wrapping around the central arena and its connected lots like a metal-guarded moat. Bridges take visitors in and out of the stadium. The football field that's normally there has been turned into a paddock park, where the 10 teams have erected semi-permanent buildings that act as their hubs during the week. 

Setting up everything the 10 teams need ahead of the competition is a whole production. Some might even call it a type of traveling circus

The paddock park inside the football field of the Hard Rock Stadium. Charlotte Hu

Ed Green, head of commercial technology for McLaren, greets me in the team’s temporary building in the paddock park. He’s wearing a short-sleeved polo in signature McLaren orange, as is everyone else walking around or sitting in the space. Many team members are also sporting what looks like a Fitbit, likely part of the technology partnership they have with Google. The partnership means that the team will also use Android connected devices and equipment—including phones, tablets and earbuds—as well as the different capabilities provided by Chrome. 

McLaren has developed plenty of custom web applications for Formula 1. “We don’t buy off-the-shelf too much, in the past two years, a lot of our strategy has moved to be on web apps,” Green says. “We’ve developed a lot into Chrome, so the team have got really quick, instant access…so if you’re on the pit wall looking at weather data and video systems, you could take that with you on your phone, or onto the machines back in the engineering in the central stadium.” 

The entrance to McLaren’s garage. Charlotte Hu

This season, there are 23 races. This structure that’s been built is their hub for flyaway races, or races that they can’t drive to from the factory. The marketing, the engineers, the team hospitality, and the drivers all share the hub. The important points in space—the paddock, garage, and race track—are linked up through fiber optic cables. 

“This is sort of the furthest point from the garage that we have to keep connected on race weekend,” Green says. “They’ll be doing all the analysis of all the information, the systems, from the garage.”

To set up this infrastructure so it’s ready to transmit and receive data in time for when the cars hit the track, an early crew of IT personnel have to arrive the Saturday before to run the cabling, and get the basics in order. Then, the wider IT team arrives on Wednesday, and it’s a mad scramble to get the rest of what they need stood up so that by Thursday lunchtime, they can start running radio checks and locking everything down. 

“We fly with our IT rig, and that’s because of the cost and complexity of what’s inside it. So we have to bring that to every race track with us,” says Green. The path to and from the team hub to the garages involves snaking in and out of corridors, long hallways and lobbies under the stadium. As we enter McLaren’s garage, we first come across a wall of headsets, each with a name label underneath, including the drivers and each of their race engineers. This is how members of the team stay in contact with one another. 

Headsets help team members stay connected. Charlotte Hu

The garage, with its narrow hallway, opens in one direction into the pit. Here you can see the two cars belonging to McLaren drivers Lando Norris and Oscar Piastri being worked on by engineers, with garage doors that open onto the race track. The two cars are suspended in various states of disassembly, with mechanics examining and tweaking them like surgeons at an operating table. The noise of drilling, whirring, and miscellaneous clunking fills the space. There are screens everywhere, running numbers and charts. One screen has the local track time, a second is running a countdown clock until curfew tonight. During the race, it will post video feeds from the track and the drivers, along with social media feeds. 

McLaren team members work on Lando Norris’ McLaren MCL60 in the garage. McLaren

We step onto a platform viewing area overlooking the hubbub. On the platform, there are two screens: one shows the mission control room back in England, and the other shows a diagram of the race circuit as a circle. “We look at the race as a circle, and that’s because it helps us see the gaps between the cars in time,” Green says. “Looking through the x, y, z coordinates is useful but actually they bunch up in the corners. Engineers like to see gaps in distances.” 

“This is sort of home away from home for the team. This is where we set up our garage and move our back office central services as well as engineering,” he notes. “We’re still in construction.”

From Miami to mission control in Woking

During race weekend, the mission control office in England, where McLaren is based, has about 32 people who are talking to the track in near real time. “We’re running just over 100 milliseconds from here in Miami back to base in Woking. They will get all the data feeds coming from these cars,” Green explains. “If you look at the team setting up the cars, you will see various sensors on the underside of the car. There’s an electronic control unit that sits under the car. It talks to us as the cars go around track. That’s regulated by the FIA. We cannot send information to the car but we can receive information from the car. Many, many years ago that wasn’t possible.”

For the Miami Grand Prix, Green estimates that McLaren will have about 300 sensors on each car for pressure taps (to measure airflow), temperature reading, speed checks across the car, and more. “There’s an enormous amount of information to be seen,” Green says. “From when we practice, start racing, to when we finish the race, we generate just about 1.5 terabytes of information from these two cars. So it’s a huge amount of information.” 

[Related: Inside the search for the best way to save humanity’s data]

Because the data comes in too quickly for any one person to handle, machine learning algorithms and neural networks in the loop help engineers spot patterns or irregularities. This software helps package the information into a form that can be used to make decisions about when a car should switch tires, push up its speed, stay out, or make a pit stop. 
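
One simple flavor of that kind of automated flagging is to compare each new sensor reading against its recent history and surface anything that drifts too far out of range. The sketch below is hypothetical: the tire-temperature trace and threshold are invented, and McLaren's actual models are far more sophisticated than a rolling z-score.

```python
import numpy as np

def flag_anomalies(readings, window=20, threshold=4.0):
    """Flag indices where a reading sits more than `threshold` standard
    deviations away from the mean of the previous `window` samples."""
    flagged = []
    for i in range(window, len(readings)):
        history = readings[i - window:i]
        mean, std = history.mean(), history.std()
        if std > 0 and abs(readings[i] - mean) > threshold * std:
            flagged.append(i)
    return flagged

# Invented tire-temperature trace with one sudden spike at the end.
temps = np.concatenate([np.random.default_rng(1).normal(95, 0.5, 100), [103.0]])
print(flag_anomalies(temps))  # the injected spike at index 100 gets flagged
```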

“It’s such a data-driven sport, and everything we do is founded on data in the decision-making, making better use of digital twins, which has been part of the team for a long time,” Green says. Digital twins are virtual models of objects that are based on scanned information. They’re useful for running simulations. 

Throughout the race weekend, McLaren will run around 200 simulations to explore different scenarios such as what would happen if the safety car came out to clear debris from a crash, or if it starts raining. “We’ve got an incredibly smart team, but when you have to make a decision in three seconds, you’ve got to have human-in-the-loop technology to feed you what comes next as well,” Green says. “It’s a lot of fun.” 
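
A toy version of that kind of scenario exploration might compare a one-stop and a two-stop plan under a random chance of a safety car, as in the sketch below. Every number here is invented for illustration; real strategy models account for tire degradation curves, traffic, fuel load, and much more.

```python
import random

random.seed(0)

def race_time(n_stops, safety_car, laps=57, base_lap=92.0, pit_loss=21.0):
    """Crude total race time: more stops cost pit-lane time but give
    fresher tires (modeled as a small per-lap gain); a safety car
    cheapens a stop because the whole field slows down anyway."""
    tire_gain = 0.15 * n_stops           # seconds saved per lap, invented
    pit_cost = pit_loss * n_stops
    if safety_car and n_stops > 0:
        pit_cost -= 10.0                 # cheaper stop under safety car
    return laps * (base_lap - tire_gain) + pit_cost

def simulate(n_stops, runs=10_000, p_safety_car=0.3):
    return sum(race_time(n_stops, random.random() < p_safety_car)
               for _ in range(runs)) / runs

for stops in (1, 2):
    print(f"{stops}-stop average: {simulate(stops):.1f} s")
```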

[Related: Can software really define a vehicle? Renault and Google are betting on it.]

Improved computing resources and better simulation technology have helped change the sport as a whole, too. Not only do they reduce the cost of testing design options (important because of the new cost cap rule that puts a ceiling on how much teams are allowed to spend on designing and building their cars), they also inform new rules for racing.  

“One of the things pre-2022, the way that the cars were designed resulted in the fact it was really hard to follow another car closely. And this is because of the aerodynamics of the car,” Figdor says. When a car zooms down the track, it distorts the air behind it. It’s like how a speedboat disrupts the water it drives through. And if you try to follow a speedboat with another speedboat in the lake, you will find that it’s quite tricky. 

“The same thing happens with Formula 1 cars,” says Figdor. “What they did in 2022 is they came up with new regulations around the design of the car that should make it easier for cars to follow each other closely on the track.”

That was possible because F1 and AWS were able to create and run realistic, and relatively fast simulations more formally called “two-car Computational Fluid Dynamics (CFD) aerodynamic simulations” that were able to measure the effects of various cars with different designs following each other in a virtual wind tunnel. “Changing regulations like that, you have to be really sure of what you’re doing. And using technology, you can just estimate many more scenarios at just a fraction of the cost,” Figdor says. 

Making sure there’s not too many engineers in the garage

The pit wall bordering the race track may be the best seat in the house, but the engineering island is one of the most important. It sits inside the garage, cramped between the two cars. Engineers from both sides of the garage will have shared resources there to look at material reliability and car performance. The engineering island is connected to the pit wall and also to a stack of servers and an IT tower tucked away in a corner of the garage. The IT tower, which has 140 terabytes of storage, 4.5 terabytes of memory, 172 logical processors, and many many batteries, keeps the team in communication with the McLaren Technology Center.  

McLaren engineers at the engineering island in the middle of the garage. McLaren

All the crew on the ground in Miami, about 80 engineers, make up around 10 percent of the McLaren team. It’s just the tip of the iceberg. The team of engineers at large work in three umbrella categories: design, build, and race. 

[Related: Behind the wheel of McLaren’s hot new hybrid supercar, the Artura]

McLaren flies their customized IT rig out to every race. McLaren

The design team will use computers to mock up parts in ways that make them lighter, more structurally sound, or higher performing. “Material design is part of that, you’ll have aerodynamicists looking at how the car’s performing,” says Green. Then, the build team will take the 3D designs and flatten them into a pattern. They’ll bring out rolls of carbon fiber that they store in a glass chiller, cut out the pattern, laminate it, bind different parts together, and put it into a big autoclave, or oven. As part of that build process, a logistics team will take the car, send it out to the racetrack, and examine how it drives. 

Formula 1 cars can change dramatically from the first race of the season to the last. 

“If you were to do nothing to the car that wins the first race, it’s almost certain to come last at the end of the season,” Green says. “You’ve got to be constantly innovating. Probably about 18 percent of the car changed from when we launched it in February to now. And when we cross that line in Abu Dhabi, probably 80 percent of the car will change.” 

There’s a rotating roster of engineers at the stadium and in the garage on different days of race week. “People have got very set disciplines and you also hear that on the radio as well. It’s the driver’s engineers that are going to listen to everything and they’re going to be aware of how the car’s set up,” Green says. “But you have some folks in aerodynamics on Friday, Saturday, particularly back in Woking. That’s so important now in modern F1—how you set the car up, the way the air is performing—so you can really over-index and make sure you’ve got more aerodynamic expertise in the room.”

The scene on Sunday

On race day, the makeup of engineers is a slightly different blend. There are more specialists focused on competitor intelligence, analysis, and strategy insight. Outside of speed, the data points they are really interested in are related to the air pressures and the air flows over the car. 

“Those things are really hard to measure and a lot of energy goes into understanding that. Driver feedback is also really important, so we try to correlate that feedback here,” Green says. “The better we are at correlating the data from our virtual wind tunnel, our physical wind tunnel, the manufacturing parts, understanding how they perform on the car, the quicker we can move through the processes and get upgrades to the car. Aerodynamics is probably at the moment the key differentiator between what teams are doing.” 

As technology advances, and partners work on more interesting products in-house, some of the work is sure to translate over to F1. Green says that there are some exciting upcoming projects looking at whether Google could help the team apply speech-to-text software to transcribe driver radios from other teams during races—work that’s currently done by human volunteers.

The post An inside look at the data powering McLaren’s F1 team appeared first on Popular Science.

Why the EU wants to build an underwater cable in the Black Sea https://www.popsci.com/technology/eu-georgia-undersea-cable/ Mon, 15 May 2023 11:00:00 +0000 https://www.popsci.com/?p=541041
Illustration of a submarine communications cable. DEPOSIT PHOTOS

According to reports, this effort will reduce reliance on communications infrastructure that runs through Russia.

Since 2021, the EU and the nation of Georgia have highlighted a need to install an underwater internet cable through the Black Sea to improve the connectivity between Georgia and other European countries. 

Since the start of the war in Ukraine, the project has garnered increased attention as countries in the South Caucasus region work to decrease their reliance on Russian resources—a trend that applies to energy as well as communications infrastructure. Internet cables have been under scrutiny because they could be tapped into by hackers or governments for spying.

“Concerns around intentional sabotage of undersea cables and other maritime infrastructure have also grown since multiple explosions on the Nord Stream gas pipelines last September, which media reports recently linked to Russian vessels,” the Financial Times reported. The proposed cable, which will cross international water through the Black Sea, will be 1,100 kilometers, or 684 miles long, and will link the Caucasus nations to EU member states. It’s estimated to cost €45 million (approximately $49 million). 

[Related: An undersea cable could bring speedy internet to Antarctica]

“Russia is one of multiple routes through which data packages move between Asia and Europe and is integral to connectivity in some parts of Asia and the Caucasus, which has sparked concern from some politicians about an over-reliance on the nation for connectivity,” The Financial Times reported. 

Across the dark depths of the globe’s oceans there are 552 cables that are “active and planned,” according to TeleGeography. All together, they may measure nearly 870,000 miles long, the company estimates. Take a look at a map showing existing cables, including in the Black Sea area, and here’s a bit more about how they work.

[Related: A 10-million-pound undersea cable just set an internet speed record]

The Black Sea cable is just one project in the European Commission’s infrastructure-related Global Gateway Initiative. According to the European Commission’s website, “the new cable will be essential to accelerate the digital transformation of the region and increase its resilience by reducing its dependency on terrestrial fibre-optic connectivity transiting via Russia. In 2023, the European Investment Bank is planning to submit a proposal for a €20 million investment grant to support this project.”

Currently, the project is still in the feasibility testing stage. While the general route and the locations for the converter stations have already been selected, it will have to go through geotechnical and geophysical studies before formal construction can go forward.

The post Why the EU wants to build an underwater cable in the Black Sea appeared first on Popular Science.

Articles may contain affiliate links which enable us to share in the revenue of any purchases made.

]]>
Google previews an AI-powered future at I/O 2023 https://www.popsci.com/technology/google-io-generative-ai/ Wed, 10 May 2023 19:17:52 +0000 https://www.popsci.com/?p=540376
Google I/O presentation about their updated language model named Gecko.
One of Google's language models is getting a big upgrade. Google / YouTube

It’s a language model takeover.

The post Google previews an AI-powered future at I/O 2023 appeared first on Popular Science.

]]>
Google I/O presentation about their updated language model named Gecko.
One of Google's language models is getting a big upgrade. Google / YouTube

Google’s annual I/O developer conference highlights all the innovative work the tech giant is doing to improve its large family of products and services. This year, the company emphasized that it is going big on artificial intelligence, especially generative AI. Expect to see more AI-powered features coming your way across a range of key services in Google’s Workspace, apps, and Cloud. 

“As an AI-first company, we’re at an exciting inflection point…We’ve been applying AI to our products to make them radically more helpful for a while. With generative AI, we’re taking the next step,” Sundar Pichai, CEO of Google and Alphabet, said in the keynote. “We are reimagining all our core products, including search.”

Here’s a look at what’s coming down the AI-created road.

Users will soon be able to work alongside generative AI to edit their photos, create images for their Slides, analyze data in Sheets, craft emails in Gmail, make backgrounds in Meet, and even get writing assistance in Docs. Google is also applying AI to translation by matching lip movements with words, so that a person speaking in English could have their words translated into Spanish—with their lip movements tweaked to match. To help users discern what content generative AI has touched, the company said it’s working on special watermarks and metadata notes for synthetic images as part of its responsible AI effort.  

The foundation of most of Google’s new announcements is the unveiling of its upgrade to a language model called PaLM, which has previously been used to answer medical questions typically posed to human physicians. PaLM 2, the next iteration of this model, promises to be faster and more efficient than its predecessor. It also comes in four sizes, from small to large, called Gecko, Otter, Bison, and Unicorn. The most lightweight model, Gecko, could be a good fit to use for mobile devices and offline modes. Google is currently testing this model on the latest phones. 

[Related: Google’s AI has a long way to go before writing the next great novel]

PaLM 2 is more multilingual and better at reasoning too, according to Google. The company says that a lot of scientific papers and math expressions have been thrown into its training dataset to help it with logic and common sense. And it can tackle more nuanced text like idioms, poems, and riddles. PaLM 2 is being applied to medicine, cybersecurity analysis, and more. At the moment, it also powers 25 Google products behind the scenes. 

“PaLM 2’s models shine when fine-tuned on domain-specific data. (BTW, fine tuning = training an AI model on examples specific to the task you want it to be good at),” Google said in a tweet.

A big reveal is that Google is now making its chatbot, Bard, available to the general public. It will be accessible in over 180 countries, and will soon support over 40 different languages. Bard has been moved to the upgraded PaLM 2 language model, so it should carry over all the improvements in capabilities. To save information generated with Bard, Google will make it possible to export queries and responses issued through the chatbot to Google Docs or Gmail. And if you’re a developer using Bard for code, you can export your work to Replit.

In essence, the theme of today’s keynote was clear: AI is helping to do everything, and it’s getting increasingly good at creating text and images and at handling complex queries, like helping someone interested in video games find colleges in a specific state that might have a major they’re interested in pursuing. Like Google search, Bard is constantly evolving and becoming more multimodal. Google aims to soon have Bard include images in its responses and prompts through Google Lens, and the company is actively working on integrating Bard with external applications like Adobe as well as a wide variety of tools, services, and extensions.

The post Google previews an AI-powered future at I/O 2023 appeared first on Popular Science.

Articles may contain affiliate links which enable us to share in the revenue of any purchases made.

]]>
To build a better crawly robot, add legs—lots of legs https://www.popsci.com/technology/centipede-robot-georgia-tech/ Mon, 08 May 2023 11:00:00 +0000 https://www.popsci.com/?p=539360
centipede robot
The centipede robot from Georgia Tech is a rough terrain crawler. Georgia Institute of Technology

Researchers hope that more limbs will allow them to have fewer sensors.

The post To build a better crawly robot, add legs—lots of legs appeared first on Popular Science.

]]>
centipede robot
The centipede robot from Georgia Tech is a rough terrain crawler. Georgia Institute of Technology

When traveling on rough and unpredictable roads, the more legs the better—at least for robots. Balancing on two legs is somewhat hard; on four legs, it’s slightly easier. But what if you had many, many legs, like a centipede? Researchers at the Georgia Institute of Technology have found that giving a robot multiple connected legs allows the machine to easily clamber over landscapes with cracks, hills, and uneven surfaces, without the extensive sensor systems that would otherwise be needed to help it navigate its environment. Their results are published in a study this week in the journal Science.

The team has previously done work modeling the motion of these creepy critters. In this new study, they created a framework for operating this centipede-like robot that was influenced by mathematician Claude Shannon’s communication theory, which posits that to transmit a signal reliably between two points despite noise, it’s better to break the message into discrete, repeating units. 

“We were inspired by this theory, and we tried to see if redundancy could be helpful in matter transportation,” Baxi Chong, a physics postdoctoral researcher, said in a news release. Their creation is a robot with linked segments, like a model train, with two legs sticking out from each segment to let it “walk.” The idea is that after the robot is told to go to a certain destination, the legs make contact with the surface along the way and send information about the terrain to the other segments, which then adjust their motion and position accordingly. The team put the robot through a series of real-world and computer trials to see how it walked, how fast it could go, and how it performed on grass, blocks, and other rough surfaces. 

[Related: How a dumpy, short-legged bird could change water bottle designs]

“One value of our framework lies in its codification of the benefits of redundancy, which lead to locomotor robustness over environmental contact errors without requiring sensing,” the researchers wrote in the paper. “This contrasts with the prevailing paradigm of contact-error prevention in the conventional sensor-based closed-loop controls that take advantage of visual, tactile, or joint-torque information from the environment to change the robot dynamics.”

They repeated the experiment with robots that had different numbers of legs (six, 12, and 14). In future work, the researchers say they want to home in on the optimal number of legs for the centipede bot so that it can move smoothly in the most cost-effective way possible.

“With an advanced bipedal robot, many sensors are typically required to control it in real time,” Chong said. “But in applications such as search and rescue, exploring Mars, or even micro robots, there is a need to drive a robot with limited sensing.” 

The post To build a better crawly robot, add legs—lots of legs appeared first on Popular Science.

Articles may contain affiliate links which enable us to share in the revenue of any purchases made.

]]>
An ambitious underwater ‘space station’ just got a major research collaborator https://www.popsci.com/technology/noaa-underwater-research-station-proteus/ Wed, 03 May 2023 19:00:00 +0000 https://www.popsci.com/?p=538695
A rendering of Proteus.
A rendering of Proteus. Concept designs by Yves Béhar and fuseproject

Fabien Cousteau's Proteus project will make a bigger splash this year.

The post An ambitious underwater ‘space station’ just got a major research collaborator appeared first on Popular Science.

]]>
A rendering of Proteus.
A rendering of Proteus. Concept designs by Yves Béhar and fuseproject

Today, the National Oceanic and Atmospheric Administration announced that it will be signing a new research agreement with Proteus Ocean Group, which has been drawing up ambitious plans to build a roomy underwater research facility that can host scientists for long stays while they study the marine environment up close. 

The facility, called Proteus, is the brainchild of Fabien Cousteau, the grandson of Jacques Cousteau.

“On PROTEUS™ we will have unbridled access to the ocean 24/7, making possible long-term studies with continuous human observation and experimentation,” Cousteau, founder of Proteus Ocean Group, said in a press release. “With NOAA’s collaboration, the discoveries we can make — in relation to climate refugia, super corals, life-saving drugs, micro environmental data tied to climate events and many others — will be truly groundbreaking. We look forward to sharing those stories with the world.”

This is by no means new territory for the government agency. NOAA has previously operated a similar reef base off the coast of Florida called Aquarius. But Aquarius is aging, and space there is relatively confined—it accommodates up to six occupants in 400 square feet. Proteus, the new project, aims to create a roughly 2,000-square-foot habitat for up to 12 occupants. 

This kind of habitat, the first of which is set to be located off the coast of Curacao in the Caribbean, is still on track to be operational by 2026, Lisa Marrocchino, CEO of Proteus Ocean Group, tells PopSci. A second location is set to be announced soon as well. “As far as the engineering process and partners, we’re just looking at that. We’ll be announcing those shortly. We’re evaluating a few different partners, given that it’s such a huge project.” 

[Related: Jacques Cousteau’s grandson is building a network of ocean floor research stations]

Filling gaps in ocean science is a key part of understanding its role in the climate change puzzle. Now that the collaborative research and development agreement is signed, the two organizations will soon be starting workshops on how to tackle future missions related to climate change, collecting ocean data, or even engineering input in building the underwater base. 

“Those will start progressing as we start working together,” Marrocchino says. “We’re just beginning the design process. It’s to the point where we are narrowing down the location. We’ve got one or two really great locations. Now we’re getting in there to see what can be built and what can’t be built.”

The NOAA partnership is only the beginning for Proteus. According to Marrocchino, Proteus Ocean Group has been chatting with other government agencies, and expects to announce more collaborations later this year. “The space community in particular is super excited about what we’re planning to do,” she says. “They really resonate with the idea that it’s very familiar to them in extreme environments, microgravity and pressure.”

Marrocchino also teased that there are ongoing negotiations with large multi-million dollar global brand partners, which will fund large portions of the innovative research set to happen at Proteus. “We’re seeing a trend towards big corporate brands coming towards the idea of a lab underwater,” she says. “You’ll see some partnership agreements geared towards advancing ocean science.” 

The post An ambitious underwater ‘space station’ just got a major research collaborator appeared first on Popular Science.

Articles may contain affiliate links which enable us to share in the revenue of any purchases made.

]]>
How a quantum computer tackles a surprisingly difficult airport problem https://www.popsci.com/technology/quantum-algorithm-flight-gate-assignment/ Mon, 01 May 2023 11:00:00 +0000 https://www.popsci.com/?p=537718
airplanes at different gates in an airport
Is there an optimal way to assign flights to gates? Chris Leipelt / Unsplash

Here’s what a quantum algorithm for a real-world problem looks like.

The post How a quantum computer tackles a surprisingly difficult airport problem appeared first on Popular Science.

]]>
airplanes at different gates in an airport
Is there an optimal way to assign flights to gates? Chris Leipelt / Unsplash

At first glance, quantum computers seem like machines that will only exist in the far-off future. And in a way, they are. Currently, the processing power of these devices is limited by the number of qubits they contain, which are the quantum equivalent of the 0-or-1 bits you’ve probably heard about in classical computers. 

The engineers behind the most ambitious quantum projects say they can orchestrate hundreds of qubits together, but because these qubits have unique yet ephemeral quantum properties, like superposition and entanglement, keeping them in the ideal state is a tough task. Taken together, this means that the problems researchers have touted quantum computers as being better at than classical machines have still not been fully realized.

Scientists say that in general, quantum machines could be better at solving problems involving optimization operations, nature simulations, and searching through unstructured databases. But without real-world applications, it can all seem very abstract. 

The flight gate assignment challenge

A group of researchers working with IBM has been crafting and testing special algorithms for specific problems that work with quantum circuits. That means the broad category of optimization tasks becomes a more specific problem, like finding the best gates to assign connecting flights to at an airport. 

[Related: In photos: Journey to the center of a quantum computer]

Here are the requirements for the problem: The computer needs to find the optimal gates to assign incoming and connecting flights to at an airport in order to minimize passenger travel time. In this sense, travel time to and from gates, the number of passengers, and whether or not a flight is present at a gate all become variables in a complex series of math equations. 

Essentially, each qubit represents one possible pairing of a flight with a gate. So the number of qubits you need to solve this problem is the number of gates multiplied by the number of flights, explains Karl Jansen, a research physicist at DESY (a particle physics research center in Germany) and an author of the preprint paper on the algorithm. 

How a ‘Hamiltonian’ is involved

In order to perform the operation on a quantum device, first they have to integrate all of this information into something called a “Hamiltonian,” which is a quantum mechanical function that measures the total energy of a system. The system, in this case, would be the connections in the airport. “If you find the minimal energy, then this corresponds to the optimal path through the airport for all passengers to find the optimal connections,” says Jansen. “This energy function, this Hamiltonian, is horribly complicated and scales exponentially. There is no way to do this on a classical computer. However, you can translate this Hamiltonian into a quantum circuit.”
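To make that formulation a bit more concrete, here is a minimal sketch (not the team’s actual code) of how a flight-gate assignment can be written as a quadratic cost over binary variables, with one 0-or-1 variable per flight-gate pair. The toy walking times and passenger counts are invented for illustration, and the sketch simply brute-forces the minimum that a quantum device would instead seek as the lowest-energy state of the corresponding Hamiltonian.

# Toy flight-gate assignment as a quadratic binary optimization.
# All numbers below are invented for illustration; they are not from the study.
import itertools

flights = ["F1", "F2"]
gates = ["A", "B"]
walk_time = {"A": 3, "B": 7}        # minutes of walking to each gate (made up)
passengers = {"F1": 120, "F2": 80}  # passengers per flight (made up)
PENALTY = 10_000                    # large constant that discourages invalid assignments

def cost(assignment):
    # assignment maps each (flight, gate) pair to a 0/1 binary variable
    total = 0
    for (f, g), x in assignment.items():
        total += x * passengers[f] * walk_time[g]   # total passenger walking time
    for f in flights:                               # each flight gets exactly one gate
        n = sum(assignment[(f, g)] for g in gates)
        total += PENALTY * (n - 1) ** 2
    for g in gates:                                 # no two flights share a gate
        n = sum(assignment[(f, g)] for f in flights)
        total += PENALTY * n * (n - 1)
    return total

# Brute-force search over all bit strings; a quantum device would instead look for
# the minimal-energy state of the equivalent Hamiltonian.
pairs = list(itertools.product(flights, gates))
best = min(
    (dict(zip(pairs, bits)) for bits in itertools.product([0, 1], repeat=len(pairs))),
    key=cost,
)
print([pair for pair, x in best.items() if x])

Even in this tiny example, the number of binary variables is the number of flights times the number of gates, which is why the qubit count grows so quickly as the airport gets bigger.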

[Related: IBM’s latest quantum chip breaks the elusive 100-qubit barrier]

In their study, Jansen and his colleagues only worked with around 20 qubits, which is not much and doesn’t offer an edge over the best classical algorithms directed at the problem today. At the moment it doesn’t really make sense to compare the solution time or accuracy to a classical calculation. “For this it would need 100 or 200 functioning qubits,” he notes. “What we want to know is if I make my problem size larger and larger, so I go to a larger and larger airport, is there at some point an advantage to using quantum mechanical principles to solve the problem.” 

Superposition, entanglement and interference

It’s important to note that controlling these machines means that the best minds across a wide range of fields, from applied math to chemistry to physics, must work together to design clever quantum algorithms, or instructions that tell the quantum computer what operations to perform and how. These algorithms are by nature different from classical algorithms, and they can involve higher-level math, like linear algebra and matrices. “The fundamental descriptions of the systems are different,” says Jeannette Garcia, senior research manager for the quantum applications and software team at IBM Research. “Namely, we have superposition and entanglement and this concept of interference.”

Although it is still to be proven, many researchers think that by using superposition, they can pack more information into the problem, and with entanglement, they could find more correlations, such as if a certain flight is correlated with another flight and another gate because they’re both domestic.

Every answer that a quantum computer gives is basically a probability, Garcia explains. A lot of work goes into formulating ways to combine answers together in a creative way to come up with the most probable answer over many repeating trials. That is what interference is—adding up or subtracting waveforms. The entanglement piece in particular is promising for chemistry but also for machine learning. “In machine learning datasets, you might have data that’s super correlated, so in other words, they are not independent from each other,” Garcia says. “This is what entanglement is. We can put that in and program that into what we’re studying as a way to conserve resources in the end, and computational power.” 

And while the new algorithm from Jansen’s team can’t really be used to make airports more efficient today, it can be translated to a variety of other problems. “Once we found really good ways of solving the flight gate assignment problem, we transferred the algorithms and improvements to these problems we are looking at for particle tracking, both at CERN but also at DESY,” Jansen said. 

In addition, you can apply the same formulation to other logistics problems such as optimizing bus routes or traffic light placements in a city. You just have to modify the information of the problem and what numbers you put into the coefficients and the binary variables. “For me, this was a good attempt to find a solution for the flight gate assignment problem,” Jansen says. “Now I’m looking at other instances where I can use this mathematical formulation to solve other problems.”

The post How a quantum computer tackles a surprisingly difficult airport problem appeared first on Popular Science.

Articles may contain affiliate links which enable us to share in the revenue of any purchases made.

]]>
Arctic researchers built a ‘Fish Disco’ to study ocean life in darkness https://www.popsci.com/technology/fish-disco-arctic-ocean/ Mon, 24 Apr 2023 11:00:00 +0000 https://www.popsci.com/?p=536004
northern lights over the Arctic ocean
Northern lights over the Arctic ocean. Oliver Bergeron / Unsplash

It's one of the many tools they use to measure artificial light’s impact on the Arctic ocean's sunless world.

The post Arctic researchers built a ‘Fish Disco’ to study ocean life in darkness appeared first on Popular Science.

]]>
northern lights over the Arctic ocean
Northern lights over the Arctic ocean. Oliver Bergeron / Unsplash

During the winter, the Arctic doesn’t see a sunrise for months on end. Although completely immersed in darkness, life in the ocean goes on. Diurnal animals like humans, accustomed to regular cycles of day and night, would be disoriented by the lack of daylight. 

But to scientists’ surprise, it seems that even the photosynthetic plankton—microorganisms that normally derive their energy from sunlight—have found a way through the endless night. These marine critters power the region’s ecosystem through the winter and into the spring bloom. Even without the sun, the daily pattern of animals migrating from the surface to the depths and back again (called diel vertical migration) remains unchanged. 

However, scientists are concerned that artificial light could have a dramatic impact on this uniquely adapted ecosystem. The Arctic is warming fast, and the ice is getting thinner—that means there are more ships, cruises, and coastal developments coming in, all of which can add light pollution to the underwater world. We know that artificial light is harmful to terrestrial animals and to birds in flight. But its impact on ocean organisms is still poorly understood. 

A research team called Deep Impact is trying to close this knowledge gap, as reported in Nature earlier this month. Doing the work, though, is no easy feat. Mainly, there’s a bit of creativity involved in conducting experiments in the darkness—researchers need to understand what’s going on without changing the behaviors of the organisms. Any illumination, even from the research ship itself, can skew their observations. This means that the team has to make good use of a range of tools that allow them to “see” where the animals are and how they’re behaving, even without light. 

One such invention is a specially designed circular steel frame called a rosette, which contains a suite of optical and acoustic instruments. It is lowered into the water to survey how marine life is moving under the ship. During data collection, the ship will make one pass across an area of water without any light, followed by another pass with the deck lights on. 

[Related: Boaty McBoatface has been a very busy scientific explorer]

There are a range of different rosettes, made up of varying instrument compositions. One rosette, called Frankenstein, can measure light’s effect on where zooplankton and fish move in the water column. Another, called Fish Disco, “emits sequences of multicolored flashes to measure how they affect the behavior of zooplankton,” according to Nature.

And of course, robots that can operate autonomously come in handy for occasions like these. Similar robotic systems have already been deployed on other aquatic missions, like exploring the ‘Doomsday glacier,’ scouring for environmental DNA, and listening for whales. In the absence of cameras, they can use acoustic-based tech, like echosounders (a sonar system), to detect objects in the water. 

In fact, without the element of sight, sound becomes a key tool for perceiving without seeing. It’s how most critters in the ocean communicate with one another. And making sense of that sound becomes an important problem to solve. To that end, a few scientists on the team are testing whether machine learning can identify what’s in the water from the patterns of sound frequencies different animals reflect. So far, an algorithm being tested has been able to distinguish between two species of cod.

The post Arctic researchers built a ‘Fish Disco’ to study ocean life in darkness appeared first on Popular Science.

Articles may contain affiliate links which enable us to share in the revenue of any purchases made.

]]>
Google is inviting citizen scientists to its underwater listening room https://www.popsci.com/technology/google-calling-our-corals/ Tue, 18 Apr 2023 19:00:00 +0000 https://www.popsci.com/?p=534974
a hydrophone underwater monitoring corals
A hydrophone underwater monitoring corals. Carmen del Prado / Google

You can help marine biologists identify fish, shrimp, and other noisy critters that are cracking along in these recordings of coral reefs.

The post Google is inviting citizen scientists to its underwater listening room appeared first on Popular Science.

]]>
a hydrophone underwater monitoring corals
A hydrophone underwater monitoring corals. Carmen del Prado / Google

In the water, sound is the primary form of communication for many marine organisms. A healthy ecosystem, like a lively coral reef, sounds like a loud symphony of snaps, crackles, pops, and croaks. These sounds can attract new inhabitants who hear its call from the open ocean. But in reefs that are degraded or overfished, it is more of a somber hum. That’s why monitoring how these habitats sound is becoming a key focus in marine research. 

To study this, scientists deploy underwater microphones, or hydrophones, for 24 hours at a time. Although these tools can pick up a lot of data, the hundreds of hours of recordings are tedious and difficult for a handful of researchers, or even labs, to go through. 

This week, tech giant Google announced that it was collaborating with marine biologists to launch an ocean listening project called “Calling our corals.” Anyone can listen to the recordings loaded on the platform—they come from underwater microphones at 10 reefs around the world—and help scientists identify fish, crabs, shrimp, dolphins, and human sounds like mining or boats. By crowdsourcing the annotation of the audio clips, scientists can gather information more quickly on the biodiversity and activity at these reefs. 

[Related: Why ocean researchers want to create a global library of undersea sounds]

As part of the experience, you can also immerse yourself in 360-degree, surround-sound views of different underwater places as you read about the importance of coral to ocean life. Or, you can peruse an interactive exhibit that takes you on a whirlwind tour. 

If you want to participate as a citizen scientist, click through to the platform, and it will take you through a training session that teaches you to identify fish sounds. Then, you can practice until you feel solid about your skills. After that, you’ll be given 30-second reef sound clips and asked to click every time you hear a fish noise, or to spot where fish sounds versus shrimp sounds appear in a spectrogram (a visual representation capturing the amplitude, frequency, and duration of sounds). 
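For readers curious what a spectrogram actually is under the hood, here is a minimal sketch using SciPy and Matplotlib. The file name and parameters are generic assumptions for illustration, not details of the Calling our corals platform.

# Minimal spectrogram sketch: turn a mono audio clip into a time-frequency image.
# "reef_clip.wav" is a hypothetical file name, not one from the actual platform.
import numpy as np
from scipy.io import wavfile
from scipy.signal import spectrogram
import matplotlib.pyplot as plt

rate, samples = wavfile.read("reef_clip.wav")
if samples.ndim > 1:                 # keep a single channel if the file is stereo
    samples = samples[:, 0]

freqs, times, power = spectrogram(samples, fs=rate, nperseg=1024)
plt.pcolormesh(times, freqs, 10 * np.log10(power + 1e-12))  # decibel scale
plt.xlabel("Time (s)")
plt.ylabel("Frequency (Hz)")
plt.title("Spectrogram of a 30-second reef recording")
plt.show()

The bright, short-lived blobs in such an image are the snaps and croaks the platform asks volunteers to flag.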

You can choose a location to begin your journey. The choices are Australia, Indonesia, French Polynesia, the Maldives, Panama, Belgium’s North Sea, the Florida Keys, Sweden, and the Gully in Canada. The whole process should take around three minutes. 

A more ambitious goal down the line is to use the data gathered through this platform to train an AI model to listen to reefs and automatically identify the different species that are present. An AI model could handle and ingest much larger amounts of data, getting researchers up to date faster on the present conditions out in the ocean. 

The post Google is inviting citizen scientists to its underwater listening room appeared first on Popular Science.

Articles may contain affiliate links which enable us to share in the revenue of any purchases made.

]]>
Meet xenobots, tiny machines made out of living parts https://www.popsci.com/technology/xenobots/ Mon, 17 Apr 2023 11:00:00 +0000 https://www.popsci.com/?p=534352
A xenobot, or a living robot, in culture, under a microscope.
Xenobots can work together to gather particulate matter into a pile. Douglas Blackiston and Sam Kriegman

The starting ingredient for these bio-robots: frog cells.

The post Meet xenobots, tiny machines made out of living parts appeared first on Popular Science.

]]>
A xenobot, or a living robot, in culture, under a microscope.
Xenobots can work together to gather particulate matter into a pile. Douglas Blackiston and Sam Kriegman

You may or may not have heard of xenobots, a kind of Frankenfrog creation that involves researchers turning frog embryo cells into tiny bio-machines that can move around, push or carry objects, and work together. These ephemeral beings were first made by a team of scientists from Tufts University and the University of Vermont in 2020. 

The goal behind building these “bots” was to understand how cells communicate with one another. Here’s a breakdown of the hard facts behind how xenobots actually work, and what they are currently used for. 

What are xenobots?

A “living robot” can sound like a scary sci-fi term, but they are not anything like the sentient androids you may have seen on screen.

“At the most basic level, this is a platform or way to build with cells and tissues, the way we can build robots out of mechanical components,” says Douglas Blackiston, a senior scientist at Tufts University. “You can almost think of it as Legos, where you can combine different Legos together, and with the same set of blocks you can make a bunch of different things.” 

Biology photo
Xenobots are tiny. Here they are against a dollar bill for size. Douglas Blackiston and Sam Kriegman

But why would someone want to build robots out of living components instead of traditional materials, like metal and plastic? One advantage is that a bio-robot is biodegradable. In environmental applications, that means if the robot breaks, it won’t contaminate the environment with garbage like metal, batteries, or plastic. Researchers can also program xenobots to fall apart naturally at the end of their lives. 

How do you make a xenobot?

The building blocks for xenobots come from the eggs laid by the female African clawed frog, which goes by the scientific name Xenopus laevis.

Just like with a traditional robot, they need other essential components: a power source, a motor or actuator for movement, and sensors. But with xenobots, all of these components are biological.

A xenobot’s energy comes from the yolk that’s a part of all amphibian eggs, which can power these machines for about two weeks with no added food. To get them to move, scientists can add biological “motors” like muscle or cardiac tissue. They can arrange the motors in different configurations to get the xenobots to move in certain directions or with a certain speed.  

“We use cardiac tissue because cardiac cells pulse at a regular rate, and that gives you sort of an inchworm type of movement if you build with it,” says Blackiston. “The other types of movement we get are from cilia. These are small hair-like structures that beat on the outside of different types of tissues. And this is a type of movement that dominates the microscopic world. If you take some pond water and look, most of what you see will move around with cilia.” 

Biology photo
Swimming xenobots with cilia covering their surface. Douglas Blackiston and Sam Kriegman

Scientists can also add components like optogenetic muscle tissues or chemical receptors to allow these biobots to respond to light or other stimuli in their environment. Depending on how the xenobots are programmed, they can autonomously navigate their surroundings, or researchers can apply stimuli to “drive” them around. 

“There’s also a number of photosynthetic algae that have light sensors that directly hook onto the motors, and that allows them to swim towards sunlight,” says Blackiston. “There’s been a lot of work on the genetic level to modify these to respond to different types of chemicals or different types of light sources and then to tie them to specific motors.”

[Related: Inside the lab that’s growing mushroom computers]

Even in their primitive form, xenobots can still convey some type of memory, or relay information back to the researchers about where they went and what they did. “You can pretty easily hook activation of these different sensors into fluorescent molecules that either turn on or change color when they’re activated,” Blackiston explains. For example, when the bots swim through a blue light, they might change color from green to red permanently. As they move through mazes with blue lights in certain parts of it, they will glow different colors depending on the choices they’ve made in the maze. The researcher can walk away while the maze-solving is in progress, and still be in the know about how the xenobot navigated through it.  

They can also, for example, release a compound that changes the color of the water if they sense something.  

These sensors make the xenobot easy to manage. In theory, scientists can make a system in which the xenobots are drawn to a certain wavelength of light. They could then shine this at an area in the water to collect all of the bots. And the ones that slip through can still harmlessly break down at the end of their life. 

A xenobot simulator

Blackiston, along with collaborators at Northwestern and University of Vermont, are using an AI simulator they built to design different types of xenobots. “It looks sort of like Minecraft, and you can simulate cells in a physics environment and they will behave like cells in the real world,” he says. “The red ones are muscle cells, blue ones are skin cells, and green ones are other cells. You can give the computer a goal, like: ‘use 5,000 cells and build me a xenobot that will walk in a straight line or pick something up,’ and it will try hundreds of millions of combinations on a supercomputer and return to you blueprints that it thinks will be extremely performant.”

Most of the xenobots he’s created have come from blueprints that have been produced by this AI. He says this speeds up a process that would have taken him thousands of years otherwise. And it’s fairly accurate as well, although there is a bit of back and forth between playing with the simulator and modeling the real-world biology. 
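To give a rough sense of how such a design loop works, here is a toy evolutionary search over simple cell-type blueprints. It is a stand-in illustration only: the real pipeline scores candidate designs in a physics simulator against goals like locomotion, whereas the fitness function below is an arbitrary placeholder.

# Toy evolutionary search over xenobot "blueprints" (grids of cell types).
# Stand-in illustration only; the real system evaluates designs in a physics simulator.
import random

CELL_TYPES = ["skin", "muscle", "cilia"]
GRID = 5 * 5  # a flat 5x5 blueprint for simplicity

def random_blueprint():
    return [random.choice(CELL_TYPES) for _ in range(GRID)]

def fitness(blueprint):
    # Placeholder objective: favor designs with some muscle for movement but mostly
    # skin for structure. The real objective is simulated performance, not this ratio.
    muscle = blueprint.count("muscle") / GRID
    return 1.0 - abs(muscle - 0.3)

def mutate(blueprint):
    child = blueprint[:]
    child[random.randrange(GRID)] = random.choice(CELL_TYPES)
    return child

population = [random_blueprint() for _ in range(50)]
for generation in range(100):
    population.sort(key=fitness, reverse=True)
    survivors = population[:10]                  # keep the best designs
    population = survivors + [mutate(random.choice(survivors)) for _ in range(40)]

print("best fitness:", fitness(max(population, key=fitness)))

The loop of scoring designs, keeping the best, and mutating them is the part that carries over; everything else about the real system is far more sophisticated.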

Biology photo
Xenobots of different shapes crafted using computer-simulated blueprints. Douglas Blackiston and Sam Kriegman

The xenobots that Blackiston and his colleagues use are not genetically modified. “When we see the xenobots doing kinematic self-replication and making copies of themselves, we didn’t program that in. We didn’t have to design a circuit that tells the cells how to do kinematic self replication,” says Michael Levin, a professor of biology at Tufts. “We triggered something where they learned to do this, and we’re taking advantage of the native problem-solving capacity of cells by giving it the right stimuli.” 

What can xenobots help us do?

Xenobots are not just blobs of cells congealed together—they work like an ecosystem and can be used as tools to explore new spaces, in some cases literally, like searching for cadmium contamination in water. 

“We’re jamming together cells in configurations that aren’t natural. Sometimes it works, sometimes the cells don’t cooperate,” says Blackiston. “We’ve learned about a lot of interesting disease models.”

For example, with one model of xenobot, they’ve been able to examine how cilia in lung cells may work to push particles out of the airway or spread mucus correctly, and see that if the cilia don’t work as intended, defects can arise in the system.

The deeper application is using these biobots to understand collective intelligence, says Levin. That could be a groundbreaking discovery for the space of regenerative medicine. 

“For example, cells are not hardwired to do these specific things. They can adapt to changes and form different configurations,” he adds. “Once we figure out how cells decide together what structures they’re going to form, we can take advantage of those computations and build new organs, regenerate after injury, reprogram tumors—all of that comes from using these biobots as a way to understand how collective decision-making works.” 

The post Meet xenobots, tiny machines made out of living parts appeared first on Popular Science.

Articles may contain affiliate links which enable us to share in the revenue of any purchases made.

]]>
How a dumpy, short-legged bird could change water bottle designs https://www.popsci.com/technology/sandgrouse-feather-design/ Wed, 12 Apr 2023 19:00:00 +0000 https://www.popsci.com/?p=533674
Pin-tailed sandgrouse in a body of water
Pin-tailed sandgrouse hovering in a watering hole. DEPOSIT PHOTOS

The sandgrouse’s unique feathers can hold and transport fluids.

The post How a dumpy, short-legged bird could change water bottle designs appeared first on Popular Science.

]]>
Pin-tailed sandgrouse in a body of water
Pin-tailed sandgrouse hovering in a watering hole. DEPOSIT PHOTOS

The sandgrouse isn’t considered an elegant-looking bird. In fact, the online database eBird describes it as a “dumpy, short-legged, pigeon-like bird that shuffles awkwardly on the ground.” But it also harbors a special secret: It can carry water weighing around 15 percent of its body weight over short flights. Short in this case means up to 20 miles—enough to travel from a watering hole back to the nest. The key to this ability is the architecture of the bird’s belly feathers. 

In a new study published this week in the journal Royal Society Interface, researchers used high-tech microscopes and 3D technology to reveal the detailed design of these feathers, with hopes that it might inspire future engineers to create better water bottles, or sports hydration packs. According to a press release, the team says that they plan to “print similar structures [as the bird feathers] in 3D and pursue commercial applications.”

To observe how the feathers performed their water-toting magic, the researchers examined specimens they got from natural history museums and looked at dry feathers up close using light microscopy, scanning electron microscopy, and micro-CT. They then dunked them in water, and repeated the process. 

[Related: Cuttlefish have amazing eyes, so robot-makers are copying them]

What they saw was that the feathers themselves were made up of a mesh of barbs, tubes, grooves, hooks, and helical coils, with components that could bend, curl, cluster together, and more when wetted. In other words, they were optimized for holding and retaining water.

Engineering photo
Close up of sandgrouse feathers. Johns Hopkins University

“The microscopy techniques used in the new study allowed the dimensions of the different parts of the feather to be measured,” according to the press release. “In the inner zone, the barb shafts are large and stiff enough to provide a rigid base about which the other parts of the feather deform, and the barbules are small and flexible enough that surface tension is sufficient to bend the straight extensions into tear-like structures that hold water. And in the outer zone, the barb shafts and barbules are smaller still, allowing them to curl around the inner zone, further retaining water.”

Further, the team used measurements of the dimensions of the feather components to make computer models of them, and to calculate factors like surface tension forces and the stiffness of the components that bend. 

“To see that level of detail…This is what we need to understand in order to use those principles to create new materials,” Jochen Mueller, an assistant professor in Johns Hopkins’ Department of Civil and Systems Engineering and an author on the paper, said in a press release

One use of this design would be to make water bottles that don’t allow the water to slosh around. Another possible application is to incorporate bits of this structure into netting in desert areas to capture and collect water from fog and dew. 

The post How a dumpy, short-legged bird could change water bottle designs appeared first on Popular Science.

Articles may contain affiliate links which enable us to share in the revenue of any purchases made.

]]>
Inside the search for the best way to save humanity’s data https://www.popsci.com/technology/long-term-data-storage/ Tue, 11 Apr 2023 13:00:00 +0000 https://www.popsci.com/?p=523451
Magnetic tape and Blu-Ray Discs for data storage forming an infinity sign on a yellow background. Illustrated.
Data storage systems fall out of fashion quickly. Christine Rösch

From Blu-ray Discs to magnetic tape, archivists are looking for a cheap storage medium that will last centuries.

The post Inside the search for the best way to save humanity’s data appeared first on Popular Science.

]]>
Magnetic tape and Blu-Ray Discs for data storage forming an infinity sign on a yellow background. Illustrated.
Data storage systems fall out of fashion quickly. Christine Rösch

INSIDE THE Library of Congress in Washington, D.C., there’s a living time capsule. The massive storage facility, run by the Motion Picture, Broadcasting, and Recorded Sound Division, is filled with wax cylinders, record players, and other pieces of dated audiovisual equipment. Some might see it as a junkyard of outdated technology, but Stephanie Barb likes to call this place the “land of lost toys.” 

“We used to play records all the time,” says Barb, the deputy director of IT service operations at the Library of Congress. Now, owning a record player is almost a whimsy.  

When machines become obsolete, the data they hold can be lost as well. Software and hardware fade out of general use as newer products and services replace them. It’s one of the several roadblocks technicians and archivists like Barb continuously run into in their quest to store information for long-term safekeeping. Right now, experts say there is no one storage device that can save data forever. Options like magnetic tape, Blu-ray Disc, and even DNA may provide stable but relatively temporary storage banks in which data can live while better technologies are tested and brought to market. However, each of these choices has its own shortcomings, and no one method is perfect in terms of both capacity and durability, with new innovations always on the horizon. 

The Library of Congress, for example, has a digital footprint of 176,000 terabytes, with its website catalogs of books, photos, videos, and other mediums taking up 5,350 terabytes alone (the equivalent of nearly 2 billion three-minute-long MP3s). Right now, this mountain of data is growing at around 1,500 terabytes a year. Archivists are racing against time to extend the life of important documents and media. 
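That MP3 comparison holds up to a little back-of-the-envelope math, assuming a typical 128 kbps encoding (an assumption for illustration; the library may have used a different figure):

# Quick sanity check of the "nearly 2 billion MP3s" comparison, assuming 128 kbps audio.
MP3_MB = 128 / 8 * 180 / 1000        # 128 kbps for 180 seconds is about 2.88 MB per song
WEBSITE_TB = 5_350
songs = WEBSITE_TB * 1_000_000 / MP3_MB
print(f"{songs / 1e9:.2f} billion three-minute MP3s")   # roughly 1.86 billion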

“Part of the preservation process is keeping operating systems and hardware up to date,” says Natalie Buda Smith, director of digital strategy at the Library of Congress. 

Nothing lasts forever

Preserving files in older mediums, like LP records and gaming consoles that have been discontinued, takes a bit of DIY tinkering. At the library, archivists rebuild vintage media players to recover data and transfer it to a more modern form of storage. Sometimes, the team even develops specialized technologies. For example, a system called IRENE, which the library codesigned with the Lawrence Berkeley National Laboratory, reads the depths of the grooves in broken phonograph records to convert the music to a digital format. 

shelves with lots of old-style recording equipment
Tape decks, record players, and other vintage data-reading tools fill the “land of lost toys.” Library of Congress

This is particularly important with the materials eligible for copyright, says Barb. Books can last forever if preserved properly, but items that are submitted for copyright on more corruptible materials, like DVDs, CDs, and DVRs, can degrade over time. “That puts us in a crunch to pull that data off those obsolete technologies and preserve it digitally, because we are going to lose what’s on there,” Barb explains. Since there’s a duplicate provided with every copyright submission, the Library of Congress typically adds it to the collections with the intention to update to a more modern method. 

Back up your work

When it comes to preserving data for the future, it’s important to keep the context in which the content exists. “Content says, ‘Here are the bits’; context says, ‘Here’s all the other stuff you need to understand those bits,’” notes Ethan Miller, director emeritus of the National Science Foundation’s Center for Research in Storage Systems. The extra context includes metadata, software, and hardware such as video game emulators. It’s the modern-day equivalent of a Rosetta Stone—a key that gives meaning to written languages and symbols of the past.

A lot of the data currently being collected is “born-digital content” rather than content that had to be digitized, Buda Smith says. Artifacts gathered from internet archiving are good examples. Even though the virtual-first information may ultimately end up on a physical medium like tape, it may live in a variety of other storage forms along the way. Saving multiple backups on different mediums is also good practice. 

Held together by tape

The library preserves the majority of its data on a decades-old medium that has so far stood the test of time: simple and affordable magnetic tape. The material is a Goldilocks medium prized for its density, data-writing speeds, and low cost. 

Even though tape storage has been around since the mid-1900s, it’s still constantly being improved upon to squeeze more and more bits of data onto each inch of tape. Companies like IBM are working to double capacity per cartridge (to a maximum of 45 terabytes) in newer generations while keeping the format relevant for the future. But tape is not foolproof. If the magnetic strip is damaged or overheated, the data can be wiped out. And while tape is faster to read from and write to than more novel mediums, the data it holds is not as easy to access or edit as information stored on flash drives or hard disk drives (HDDs). 

A driving force

The way you use data, and how often, will influence which storage mediums are the best fit. HDDs—the basis of cloud infrastructure—are a good starting solution for small companies with digital collections, says Shawn Brume, IBM’s storage strategist. Take movie studios, for example. 

“We are almost 25 years into [the filming of] the Star Wars prequels,” says Brume. “Disney has never moved the raw footage from filming those off of digital technology, and has stated that it will not.” That’s because keeping them on a hard drive makes cutting footage or inserting footage, whenever the filmmakers decide they want to make changes, much easier.

But HDD becomes more expensive with time and scale, Brume adds, making its use a pricey hassle with systems that continuously pump out large batches of data, like autonomous vehicles. The average driverless car system will generate upwards of 400 terabytes a year: If you have millions of cars all doing the same, then companies will easily get crushed by HDDs. Across the industry, the total cost to store a terabyte of data on HDD deep density storage (including infrastructure operations costs) ranges from approximately $0.70 to approximately $0.80 per month, according to Brume. For tape, it’s much less, at approximately $0.08 to $0.12 per month. So with this method, the information will eventually need to be migrated to tape for lower-cost, longer-term, and offline storage. “It’s a process of ingest, collate, coordinate, and copy out to tape,” Brume says. 
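To see how quickly those per-terabyte prices diverge at fleet scale, here is a quick back-of-the-envelope calculation using the figures quoted above. The fleet size is an arbitrary assumption, and the math ignores how data accumulates over the year, so it is only meant to show the rough size of the gap.

# Back-of-the-envelope storage cost comparison using the figures quoted above.
# The fleet size is a hypothetical assumption for illustration.
TB_PER_CAR_PER_YEAR = 400
FLEET_SIZE = 1_000_000                 # hypothetical number of driverless cars
HDD_COST_PER_TB_MONTH = 0.75           # midpoint of the ~$0.70-0.80 range
TAPE_COST_PER_TB_MONTH = 0.10          # midpoint of the ~$0.08-0.12 range

data_tb = TB_PER_CAR_PER_YEAR * FLEET_SIZE          # data generated in one year
hdd_annual = data_tb * HDD_COST_PER_TB_MONTH * 12   # cost to keep it on HDD for a year
tape_annual = data_tb * TAPE_COST_PER_TB_MONTH * 12

print(f"{data_tb:,} TB generated per year")
print(f"HDD:  ${hdd_annual:,.0f} per year")
print(f"Tape: ${tape_annual:,.0f} per year")

Under those assumptions, the HDD bill runs into the billions of dollars a year, while tape comes in at a fraction of that, which is the gap driving the migration Brume describes.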

IBM advises companies on how to move their data from HDDs to long-term tape infrastructure if they will need to retrieve it in the future. But the drawback of tape, unlike hard drives, is that it is pretty hard to alter. You have to erase and rewrite everything even if you want to change just one detail. 

The race to make space

An often overlooked contender may soon pull ahead of tape and cloud storage in the eternal-storage race. Many experts agree that Blu-ray, or polycarbonate optical discs, shows immense promise, especially for preserving data for decades, and maybe centuries, in an untouched box. Named after the violet laser in the reader, this system has an edge over flash or hard drives, as the parts don’t wear out, Miller explains. 

It all comes down to basic mechanics. HDDs don’t read or write very well after being powered down for a spell. Similarly, flash drives have a limited lifetime. That’s because the electrons in the device’s transistors leak out with use, passing through barriers and altering the charge of the material over months and years. “That means you have to read the flash every so often and rewrite the data,” Miller says. 

That’s where Blu-ray can excel. According to Miller, the technology needed to scan the discs is relatively simple in its construction: It’s basically a motor that spins, a reader that goes in and out, and a low-power laser. Optical drives are even simpler than those used for magnetic tapes. A lower price point of $50 to $200 per drive also sweetens the deal.  

To Miller, the question of where to store data boils down to the question of what technologies will be available in 100 to 1,000 years to read it—whether from Blu-ray or more experimental forms of storage like glass and DNA.

“If you look at history, nothing has been the forever medium except for something that’s chiseled on the wall in a cave,” Brume says. But even that information corrodes. With every new invention for record-keeping—stone, paper, code—knowledge still had to be passed down and translated to the next place. “We’ve always had to manage data,” he adds. “There’s never been a forever instance of anything.”

The post Inside the search for the best way to save humanity’s data appeared first on Popular Science.

Articles may contain affiliate links which enable us to share in the revenue of any purchases made.

]]>
These glasses can pick up whispered commands https://www.popsci.com/technology/echospeech-glasses/ Sat, 08 Apr 2023 11:00:00 +0000 https://www.popsci.com/?p=532690
silent speech-recognizing glasses
They may look like ordinary glasses but they're not. Cornell University

It's like a tiny sonar system that you wear on your face.

The post These glasses can pick up whispered commands appeared first on Popular Science.

]]>
silent speech-recognizing glasses
They may look like ordinary glasses but they're not. Cornell University

These trendy-looking glasses from researchers at Cornell have a special ability—and it doesn’t have to do with nearsightedness. Embedded on the bottom of the frames are tiny speakers and microphones that can emit silent sound waves and receive echoes back. 

This ability comes in handy for detecting mouth movements, allowing the device to detect low-volume or even silent speech. That means you can whisper or mouth a command, and the glasses will pick it up like a lip reader. 

The engineers behind this contraption, called EchoSpeech, are set to present their paper on it at the Association for Computing Machinery Conference on Human Factors in Computing Systems in Germany this month. “For people who cannot vocalize sound, this silent speech technology could be an excellent input for a voice synthesizer,” Ruidong Zhang, a doctoral student at Cornell University and an author on the study, said in a press release. The tech could also be used by its wearers to give silent commands to a paired device, like a laptop or a smartphone. 

[Related: Your AirPods Pro can act as hearing aids in a pinch]

In a small study that had 12 people wearing the glasses, EchoSpeech proved that it could recognize 31 isolated commands and a string of connected digits issued by the subjects with error rates of less than 10 percent. 

Here’s how EchoSpeech works. The speakers and microphones are placed on different lenses, on opposite sides of the face. When the speakers emit sound waves at around 20 kilohertz (near ultrasound), the waves travel from one lens to the lips and then to the opposite lens. As the sound waves reflect and diffract off the lips, their distinct patterns are captured by the microphones and used to make “echo profiles” for each phrase or command. It effectively works like a simple, miniaturized sonar system. 
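To give a rough sense of what an “echo profile” means computationally, the sketch below cross-correlates an emitted near-ultrasound sweep with a simulated received signal to find the echoes. It illustrates the general sonar principle only; the frequencies, delays, and noise levels are invented, and this is not the EchoSpeech team’s actual signal-processing pipeline.

# Illustrative echo-profile computation: cross-correlate an emitted near-ultrasound
# sweep with what the microphone records. All parameters are invented.
import numpy as np
from scipy.signal import chirp, find_peaks

FS = 48_000                                         # sample rate in Hz (assumed)
t = np.arange(0, 0.01, 1 / FS)                      # one 10-millisecond frame
emitted = chirp(t, f0=18_000, t1=0.01, f1=22_000)   # frequency sweep around 20 kHz

# Fake a received signal: two attenuated, delayed copies of the emission (standing in
# for, say, a lip reflection and a frame reflection) plus noise.
received = np.zeros_like(emitted)
for delay_samples, gain in [(30, 0.6), (55, 0.3)]:
    received[delay_samples:] += gain * emitted[:-delay_samples]
received += 0.05 * np.random.randn(received.size)

# The echo profile: correlation strength at each time lag. Mouth movements shift and
# reshape these peaks, and that changing pattern is what a classifier learns from.
echo_profile = np.correlate(received, emitted, mode="full")[emitted.size - 1:]
peaks, _ = find_peaks(echo_profile, height=0.3 * echo_profile.max())
print("echoes detected at lags (samples):", peaks)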

Through machine learning, these echo profiles can be used to infer speech, or the words that are spoken. While the model is pre-trained on select commands, it also goes through a fine-tuning phase for each individual that takes every new user around 6 to 7 minutes to complete. This is just to enhance and improve its performance.

[Related: A vocal amplification patch could help stroke patients and first responders]

The soundwave sensors are connected to a micro-controller with a customized audio amplifier that can communicate with a laptop through a USB cable. In a real-time demo, the team used a low-power version of EchoSpeech that could communicate wirelessly through Bluetooth with a micro-controller and a smartphone. The Android phone that the device connected to handled all processing and prediction and transmitted results to certain “action keys” that let it play music, interact with smart devices, or activate voice assistants.

“Because the data is processed locally on your smartphone instead of uploaded to the cloud, privacy-sensitive information never leaves your control,” François Guimbretière, a professor at Cornell University and an author on the paper, noted in a press release. Plus, audio data takes less bandwidth to transmit than videos or images, and takes less power to run as well. 

See EchoSpeech in action below: 

The post These glasses can pick up whispered commands appeared first on Popular Science.

Articles may contain affiliate links which enable us to share in the revenue of any purchases made.

]]>
Google Flights’ new feature will ‘guarantee’ the best price https://www.popsci.com/technology/google-flights-price-guarantee/ Tue, 04 Apr 2023 19:00:00 +0000 https://www.popsci.com/?p=525393
man standing in front of flights screen at airport
Google is testing a feature that will help users find the best price possible. Danila Hamsterman / Unsplash

Just in time for travel season.

The post Google Flights’ new feature will ‘guarantee’ the best price appeared first on Popular Science.

]]>
man standing in front of flights screen at airport
Google is testing a feature that will help users find the best price possible. Danila Hamsterman / Unsplash

Earlier this week, Google introduced a suite of new search features that will hopefully reduce some anxiety around travel planning. These tools, which promise to help make looking for places to stay and things to do more convenient, also include a “price guarantee” through the “Book with Google” option for flights departing from the US—which is Google’s way of saying that this deal is the best it’s going to get “before takeoff.” 

Already, Google offers ways to see historical price data for the flights users want to book. But many companies use revenue-maximizing AI algorithms to vary individual ticket pricing based on the capacity of the plane, demand, and competition with other airlines. This puts the onus on consumers to continuously monitor and research tickets in order to get the best deal. Specialty sites and hacks have popped up, offering loopholes around dynamic pricing (much to the dismay of major airlines).

Google’s pilot program for ticket pricing appears to offer another solution for consumers so they don’t have to constantly shop around for prices and come back day after day. To back it, Google says that if the price drops after booking, it will send you the difference back via Google Pay. 

[Related: The highlights and lowlights from the Google AI event]

The fine print (available via an online help document) specifies that the price difference must be greater than $5 and every user is limited to $500 per calendar year. And only users with a US billing address, phone number, and Google account can take advantage of this algorithm. Still, the fact that a person could receive back several hundred dollars after booking feels non-trivial. 
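The rules spelled out in that help document are simple enough to express directly. The sketch below is an illustration of those stated conditions, not Google’s actual implementation, and the sample prices are made up.

# Illustration of the price-guarantee rules described above (not Google's code):
# refunds apply only when the drop exceeds $5, up to $500 per user per calendar year.
def refund_amount(booked_price, lowest_price_before_takeoff, refunded_so_far_this_year):
    drop = booked_price - lowest_price_before_takeoff
    if drop <= 5:                                    # small drops don't qualify
        return 0.0
    remaining_cap = max(0.0, 500.0 - refunded_so_far_this_year)
    return min(drop, remaining_cap)

print(refund_amount(320.00, 260.00, 0.0))    # 60.0
print(refund_amount(320.00, 318.00, 0.0))    # 0.0  (a $2 drop is under the $5 threshold)
print(refund_amount(900.00, 300.00, 450.0))  # 50.0 (the $500 annual cap is nearly used up)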

According to The Washington Post, “airlines have to partner with Google to participate in the Book on Google program — and to appear on Google Flights in the first place,” therefore it’s possible that users will still have to do some independent research for tickets offered by airlines outside of the partnerships. And since it’s only a pilot program, the feature in and of itself is subject to change. 

“For now, Alaska, Hawaiian and Spirit Airlines are the main Book on Google partners, so they are likely to have the most price-guaranteed itineraries during the pilot phase,” USA Today reported. “But Google representatives said they’re hoping to expand the program to more carriers soon.”

The Verge noted that price guarantees aren’t a new thing in the travel space. For example, “Priceline and Orbitz both promise partial refunds under certain circumstances, as do some individual airlines.” 

Interested in testing this out? Head on over to Google Flights and look for the rainbow shield icon when browsing for tickets.

Why researchers surveyed more than 1.1 billion objects across 73 museums https://www.popsci.com/technology/natural-history-museum-global-survey/ Sat, 01 Apr 2023 11:00:00 +0000 https://www.popsci.com/?p=524630
A peek into the vaults of the Smithsonian Institution's National Museum of Natural History.
A peek into the vaults of the Smithsonian Institution's National Museum of Natural History. Chip Clark / Smithsonian Institution

Is there a way to keep track of all the items held in natural history museums?

The post Why researchers surveyed more than 1.1 billion objects across 73 museums appeared first on Popular Science.


Natural history museums offer amazing portals into worlds miles away from our own, and into eras from the distant past. Comprised of fossils, minerals, preserved specimens, and much more, some collections are of palatial grandeur. Although every museum has some sort of system in place to track incoming and outgoing items, those systems are not connected, museum to museum. Keeping a more detailed record of who has what across the world could not only be important for conservation, but for cataloging how life on Earth has changed, and forecasting how it will continue to do so in the future. 

For example, there are case studies showing how analyzing the collections of these museums can be useful for studying pandemic preparedness, invasive species, colonial heritage, and more. 

But this lack of connection might be a thing of the past. A paper published in the journal Science last week describes how a dozen large museums came together to map the entire collections of 73 of the world’s largest natural history museums across 28 countries in order to figure out what digital infrastructure is needed to establish a global inventory survey. 

“There is no single shared portal covering the breadth of life, Earth, and anthropological specimens in natural history collections, nor a way for researchers to link these data with other sources of information,” the researchers wrote in the paper. “We envision a coordinated strategy for the global collection that is based on strategic collecting, increased digitization, new technologies, and enhanced networking and coordination of museums.” 

[Related: Why ocean researchers want to create a global library of undersea sounds]

Although recent tech advances in fields like isotopic identification, imaging, genomic analysis, and machine learning are making it easier to access information related to the collections, in order to have a shared portal of sorts, the team found that they needed to first work through some logistical kinks. 

Why researchers surveyed more than 1.1 billion objects across 73 museums
The exhibition “digitize!” offers a unique look at how the museum’s collections are imaged and digitized. Thomas Rosenthal / Museum für Naturkunde Berlin

“Until now, it has been difficult to enumerate or compare the complete contents of large museums because their collections are not fully digitized, and the terminology used to describe subcollections is variable,” they wrote. This, they think, is due to the fact that most museums operate independently, and do not have the data structure needed to provide open access to the outside.

“Most of the collection information that we surveyed is not digitally accessible: Only 16% of the objects have digitally discoverable records, and only 0.2% of biological collections have accessible genomic records,” they added.

Why researchers surveyed more than 1.1 billion objects across 73 museums
Scanning the barcode attached to the insect specimen links to a digital copy. Nico Garstman / Naturalis Biodiversity Center

Therefore, for the purpose of this survey, they came up with a common vocabulary for the types of objects, and the collections or geographic source areas they can be categorized into. They ran this methodology with all 73 museums, going through 1,147,934,687 specimens in total. You can see the breakdown in an online dashboard the team created. 
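
As a rough illustration of the kind of cross-museum tally this shared vocabulary enables, a few lines of Python can roll specimen counts up by object type and geographic source area. The museum names, categories, and counts below are invented for the example; only the approach of aggregating standardized records reflects the survey.

```python
from collections import defaultdict

# Hypothetical records: (museum, object_type, source_area, specimen_count).
records = [
    ("Museum A", "Insects", "Tropics", 1_200_000),
    ("Museum A", "Minerals", "Polar", 80_000),
    ("Museum B", "Insects", "Polar", 15_000),
    ("Museum B", "Fossils", "Marine", 450_000),
]

totals = defaultdict(int)
for _museum, object_type, area, count in records:
    totals[(object_type, area)] += count  # collapse museums into one global view

for (object_type, area), count in sorted(totals.items()):
    print(f"{object_type:10s} {area:8s} {count:>12,}")
```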

[Related: Open data is a blessing for science—but it comes with its own curses]

The survey found that while there was a vast diversity of items spanning areas of study from biology and geology to paleontology and anthropology, there were also “conspicuous gaps across museum collections in areas including tropic and polar regions, marine systems, and undiscovered arthropod and microbial diversity,” they noted. “These gaps could provide a roadmap for coordinated collecting efforts going forward.”

This is not to say that museums have been totally in the dark with their data. In fact, the survey organizers brought attention to several existing networks that have been established to integrate biodiversity data around the world, including the Global Biodiversity Information Facility, the Earth BioGenome Project, and the International Barcode of Life, just to name a few. 

Plus, they lauded programs like the Atlas of Living Australia and Integrated Digitized Biocollections for coming up with “innovative solutions to support collection digitization, data integration, and mobilization.” Having more readily available datasets makes it easier for others to work with them to look for patterns, or build tools and models that are helpful for the scientific community at-large.

Internet Archive just lost a federal lawsuit against big book publishers https://www.popsci.com/technology/internet-archive-loses-lawsuit/ Tue, 28 Mar 2023 21:30:00 +0000 https://www.popsci.com/?p=523708
books and statues in a library
Internet Archive's online library might have to change. Giammarco Boscaro / Unsplash

It plans to appeal the ruling.

The post Internet Archive just lost a federal lawsuit against big book publishers appeared first on Popular Science.


Internet Archive, best known for a tool called the Wayback Machine, is also a massive non-profit digital library, free to everyone who creates an account with their email address. However, in 2020, it was sued by four corporate publishers over copyright issues, and in 2022, the non-profit organization asked a federal judge to put a stop to the lawsuit. Last week, the case came before US District Judge John Koeltl of New York, and on Friday, the federal judge ruled in favor of the publishers. 

At its heart, the dispute is about the way that the digital library lends books. The four publishers behind the suit (Hachette Book Group, HarperCollins, Penguin Random House, and Wiley) alleged that 127 books under their copyrights were scanned and loaned out electronically without their permission. Internet Archive argues that this practice is fair use. 

Typically, public and academic libraries acquire books for their patrons by either buying the physical copies, or paying for ebook licenses through so-called aggregators like OverDrive. Each publisher has a slightly different profit model when it comes to licensing. But in summary, all of them are fairly lucrative. “For example, library ebook licenses generate around $59 million per year for Penguin. Between 2015 and 2020, HarperCollins earned $46.91 million from the American library ebook market,” according to a court filing on the case. 

[Related: A copyright lawsuit threatens to kill free access to Internet Archive’s library of books]

Internet Archive mostly employed a practice called Controlled Digital Lending, where it first purchases a hard copy of a book, and scans it to make an ebook. Controlled Digital Lending works so that if the library owns one physical copy of a book, it can lend the digital version out to one user at a time, and if it owns four physical copies, then it can lend out four digital copies. Internet Archive uses software to ensure that users cannot copy or view the copies after the loan period. 
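
The own-to-loan ratio at the heart of Controlled Digital Lending is simple enough to express in a few lines of code. The class below is a simplified model of the rule as described above, not Internet Archive's actual lending software.

```python
class ControlledDigitalLending:
    """Simplified model: digital checkouts never exceed physical copies owned."""

    def __init__(self, physical_copies_owned: int):
        self.physical_copies_owned = physical_copies_owned
        self.active_digital_loans = 0

    def check_out(self) -> bool:
        # Lend one more digital copy only if an owned copy isn't already out.
        if self.active_digital_loans < self.physical_copies_owned:
            self.active_digital_loans += 1
            return True
        return False  # otherwise the patron waits for a return

    def return_copy(self) -> None:
        self.active_digital_loans = max(0, self.active_digital_loans - 1)

book = ControlledDigitalLending(physical_copies_owned=4)
print([book.check_out() for _ in range(5)])  # [True, True, True, True, False]
```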

But it temporarily suspended this one-copy-per-loan policy during the COVID-19 lockdowns in order to implement a “National Emergency Library.” With the Emergency Library in place from March to June 2020, many readers were allowed to borrow the same book simultaneously. And this appears to be the key issue that swayed the judge to the publishers’ side, NPR reported. Reuters reported that Koeltl homed in on the question of “whether the library has the right to reproduce the book that it otherwise has the right to possess.”

Internet Archive doesn’t dispute that it copied the publishers’ works without permission. But its argument is that the doctrine of fair use “allows some unauthorized uses of copyrighted works,” provided that they align with copyright law’s original purpose, which is to “promote the Progress of Science and useful Arts,” according to the court filing. Fair use exemptions in court can be complicated and are considered on a case-by-case basis, since many factors, like the effect on the market, the transformation of the original, and the purpose of use, need to be accounted for. And ultimately, the judge ruled that “each enumerated fair use factor favors the Publishers.”

Based on the ruling, Internet Archive can still distribute books in its collection that are public domain. “It also may use its scans of the Works in Suit, or other works in its collection, in a manner consistent with the uses deemed to be fair in Google Books and HathiTrust,” the filing stated. “What fair use does not allow, however, is the mass reproduction and distribution of complete copyrighted works in a way that does not transform those works and that creates directly competing substitutes for the originals.”

In a statement, Internet Archive said that it planned to appeal the judgment. “This decision impacts libraries across the US who rely on controlled digital lending to connect their patrons with books online,” Chris Freeland, the director of Open Libraries at Internet Archive, wrote in a blog post. “It hurts authors by saying that unfair licensing models are the only way their books can be read online.” 

Additionally, in its petition site’s FAQ, Internet Archive noted that the ruling has the potential not only to change how libraries work, but also to affect how content is preserved against the threat of censorship. “Most digital books can only be licensed, meaning there is effectively only one copy of a digital book and it can be edited or deleted at any time with zero transparency,” the site stated. “In this scenario, profit-motivated big publishing shareholders for companies like Newscorp, Amazon, and Disney are in control of whether a book is censored or not.” 

Tiny, fast lasers are unlocking the mysteries of photosynthesis https://www.popsci.com/technology/ultrafast-spectroscopy-photosynthesis/ Mon, 27 Mar 2023 10:00:00 +0000 https://www.popsci.com/?p=522857
plant leaf up close
How does photosynthesis really work? This tool might help us figure it out. Clay Banks / Unsplash

Seeing the process in fractions of a blink could provide insights for clean fuel and more climate-sturdy plants.

The post Tiny, fast lasers are unlocking the mysteries of photosynthesis appeared first on Popular Science.


Renewable energy is easy for plants. These green organisms take water, sunlight and carbon dioxide and make their own fuel. The magic happens within teeny molecular structures too small for the human eye to perceive. 

But while this process is a breeze for plants, truly understanding what happens is surprisingly hard for humans. Scientists know that it involves electrons, charge transfers, and some atomic-level physics, but the specifics of what happens and when are a bit hazy. Efforts have been made to decipher this mystery utilizing a range of tools from nuclear magnetic resonance to quantum computers.

Enter an approach that shoots laser pulses at live plant cells to take images of them, study author Tomi Baikie, a fellow at the Cavendish Laboratory at Cambridge University, explained to Earther. Using this tech, Baikie and his colleagues delved into the reaction centers of plant cells. Their findings were published this week in the journal Nature.

Engineering photo
An animation of the photosynthesis process. Mairi Eyres

The technique they used allowed the researchers to carefully watch what the electrons are doing, and “follow the flow of energy in the living photosynthetic cells on a femtosecond scale – a thousandth of a trillionth of a second,” according to a press release from University of Cambridge. 
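
For scale, the “thousandth of a trillionth of a second” in the press release is simply the definition of a femtosecond:

```latex
\[
1\ \text{femtosecond} \;=\; \frac{1}{10^{3}} \times \frac{1}{10^{12}}\ \text{s} \;=\; 10^{-15}\ \text{s}
\]
```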

Keeping such a close eye on the electrons enabled the scientists to observe, for example, where the protein complex could leak electrons and how charges move down the chain of chemical reactions. “We didn’t know as much about photosynthesis as we thought we did, and the new electron transfer pathway we found here is completely surprising,” Jenny Zhang, who coordinated the research, said in the statement.

[Related: The truth about carbon capture technology]

Knowing the intricacies behind how this natural process functions “opens new possibilities for re-wiring biological photosynthesis and creates a link between biological and artificial photosynthesis,” the authors wrote in the paper. That means they could one day use this knowledge to help reengineer plants to tolerate more sun, or create new formulas for cleaner, light-based fuel for people to use. 

Although the possibility of “hacking” photosynthesis is more speculative, the team is excited about the potential of ultrafast spectroscopy itself, seeing how it can provide “rich information” on the “dynamics of living systems.” As PopSci previously reported, “using ultrashort pulses for spectroscopy allows scientists to peer into the depths of molecules and atoms, or into processes that start and finish in tiny fractions of a blink.”

Engineers created a paper plane-throwing bot to learn more about flight https://www.popsci.com/technology/paper-airplane-robot-epfl/ Sat, 18 Mar 2023 11:00:00 +0000 https://www.popsci.com/?p=520729
paper airplanes
Paper airplane designs are being put formally to the test. Matt Ridley / Unsplash

The bot made and launched more than 500 planes with dozens of designs. Here’s what happened.

The post Engineers created a paper plane-throwing bot to learn more about flight appeared first on Popular Science.


How you fold a paper airplane can determine how fast or how far it goes. A lot of people arrive at the best designs through trial, error, and perhaps a little bit of serendipity. The paper plane can be modeled after the structure of a real aircraft, or something like a dart. But this question is no child’s play for engineers at the Swiss Federal Institute of Technology Lausanne (EPFL). 

A new paper out in Scientific Reports this week proposes a rigorous, technical approach for testing how the folding geometry can impact the trajectory and behavior of these fine flying objects. 

“Outwardly a simple ‘toy,’ they show complex aerodynamic behaviors which are most often overlooked,” the authors write. “When launched, there are resulting complex physical interactions between the deformable paper structure and the surrounding fluid [the air] leading to a particular flight behavior.”

To dissect the relationship between a folding pattern and flight, the team developed a robotic system that can fabricate, test, analyze, and model the flight behavior of paper planes. This robot paper plane designer (really a robot arm fashioned with silicone grippers) can run through this whole process without human feedback. 

Engineering photo
A video of the robot at work. Obayashi et. al, Scientific Reports

[Related: How to make the world’s best paper airplane]

In this experiment, the bot arm made and launched over 500 paper airplanes with 50 different designs. Then it used footage from a camera that recorded the flights to obtain stats on how far each design flew and the characteristics of that flight. 

Engineering photo
Flying behaviors with paths mapped. Obayashi et. al, Scientific Reports

During the study, while the paper planes did not always fly the same way, the researchers found that different shapes could be sorted into three broad “behavioral groups.” Some designs follow a nose-dive path, which, as you might imagine, means a short flight before plunging to the ground. Others glide, descending at a consistent, relatively controlled rate and covering a longer distance than the nose dive. The third type is a recovery glide, in which the paper creation descends steadily before leveling off and holding a certain height above the ground.
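
A hedged sketch of how recorded flight paths might be sorted into those three groups is below, using descent rate early and late in the flight as the deciding features. The thresholds and feature choices are invented for illustration and are not the classifier used in the paper.

```python
def classify_flight(heights: list[float], dt: float = 0.02) -> str:
    """Label an altitude trace as 'nose dive', 'glide', or 'recovery glide'.

    `heights` is the plane's altitude (meters) sampled every `dt` seconds.
    Thresholds below are illustrative, not taken from the study.
    """
    n = max(1, len(heights) // 3)
    early_rate = (heights[n] - heights[0]) / (n * dt)       # m/s, negative = descending
    late_rate = (heights[-1] - heights[-n - 1]) / (n * dt)

    if early_rate < -3.0:    # plunges almost immediately
        return "nose dive"
    if late_rate > -0.5:     # levels off near the end of the flight
        return "recovery glide"
    return "glide"           # steady, controlled descent
```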

“Exploiting the precise and automated nature of the robotic setup, large scale experiments can be performed to enable design optimization,” the researchers noted. “The robot designer we propose can advance our understanding and exploration of design problems that may be highly probabilistic, and could otherwise be challenging to observe any trends.”

When they say that the problem is probabilistic, they are referring to the fact that every design iteration can vary in flight across different launches. In other words, folding a paper plane the same way each time doesn’t guarantee that it’s going to fly the exact same way. This insight can also apply to the changeable flight paths of small flying vehicles. “Developing these models can be used to accelerate real-world robotic optimization of a design—to identify wing shapes that fly a given distance,” they wrote. 

Google’s AI doctor appears to be getting better https://www.popsci.com/technology/google-health-ai-doctor-update/ Thu, 16 Mar 2023 22:00:00 +0000 https://www.popsci.com/?p=520348
Dr. Alan Karthikesalingam presenting at the Google health event.
Dr. Alan Karthikesalingam presenting at the Google health event. Google / YouTube

It's all part of the company's grand mission to make personalized health info more accessible.

The post Google’s AI doctor appears to be getting better appeared first on Popular Science.


Google believes that mobile and digital-first experiences will be the future of health, and it has stats to back it up—namely the millions of questions asked in search queries, and the billions of views on health-related videos across its video streaming platform, YouTube. 

The tech giant has nonetheless had a bumpy journey in its pursuit to turn information into useful tools and services. Google Health, the official unit that the company formed in 2018 to tackle this issue, dissolved in 2021. Still, the mission lived on in bits across YouTube, Fitbit, Health AI, Cloud, and other teams. 

Google is not the first tech company to dream big when it comes to solving difficult problems in healthcare. IBM, for example, is interested in using quantum computing to get at topics like optimizing drugs targeted to specific proteins, improving predictive models for cardiovascular risk after surgery, and cross-searching genome sequences and large drug-target databases to find compounds that could help with conditions like Alzheimer’s.

[Related: Google Glass is finally shattered]

In Google’s third annual health event on Tuesday, called “The Check Up,” company executives provided updates about a range of health projects that they have been working on internally, and with partners. From a more accurate AI clinician, to added vitals features on Fitbit and Android, here are some of the key announcements. 

AI photo
A demo of how Google’s AI can be used to guide pregnancy ultrasound. Charlotte Hu

For Google, previous research at the intersection of AI and medicine has covered areas such as breast cancer detection, skin condition diagnoses, and the genomic determinants of health. Now, it’s expanding its AI models to include more applications, such as cancer treatment planning, finding colon cancer from images of tissues, and identification of health conditions on ultrasound. 

[Related: Google is launching major updates to how it serves health info]

Even more ambitiously, instead of using AI for a specific healthcare task, researchers at Google have also been experimenting with using a generative AI model, called Med-PaLM, to answer commonly asked medical questions. Med-PaLM is based on a large language model Google developed in-house called PaLM. In a preprint paper published earlier this year, the model scored 67.6 percent on a benchmark test containing questions from the US Medical Licensing Exam. 

At the event, Alan Karthikesalingam, a senior research scientist at Google, announced that with the second iteration of the model, Med-PaLM 2, the team has bumped its accuracy on medical licensing questions to 85.4 percent. According to clinician reviews, Med-PaLM’s answers are sometimes not as comprehensive as those from human physicians, but they are generally accurate, he said. “We’re still learning.” 

AI photo
An example of Med-PaLM’s evaluation. Charlotte Hu

In the realm of language models, although it’s not the buzzy new Bard, a conversational AI called Duplex is being employed to verify whether providers accept federal insurance like Medicaid, boosting a key search feature Google first unveiled in December 2021. 

[Related: This AI is no doctor, but its medical diagnoses are pretty spot on]

On the consumer hardware side, Google devices like Fitbit, Pixel, and Nest will now be able to provide users with an extended set of metrics regarding their heart rate, breathing, skin temperature, sleep, stress, and more. For Fitbit, the sensors are more evident. But the cameras on Pixel phones, as well as the motion and sound detectors on Nest devices, can also give personal insights on well-being. Coming to Fitbit’s sleep profile feature is a new metric called stability, which tells users when they’re waking up in the night by analyzing their movement and heart rate. Google also plans to make a lot more of its health metrics available, without a subscription, to users with compatible devices. These include respiration, which uses a camera and non-AI algorithms to detect movement and track pixels, and heart rate, which relies on an algorithm that measures changes in skin color. 
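
As a rough sketch of the camera-based heart rate idea (periodic changes in skin color as blood pulses through a fingertip), the snippet below estimates beats per minute from the average green-channel brightness of successive frames. It is a bare-bones illustration of the general technique, sometimes called remote photoplethysmography, and not Google's algorithm; the frequency band and the lack of filtering are simplifying assumptions.

```python
import numpy as np

def estimate_bpm(frames: np.ndarray, fps: float = 30.0) -> float:
    """Estimate heart rate from a stack of RGB fingertip frames.

    frames: array of shape (num_frames, height, width, 3).
    A real system would also filter noise, reject motion, and validate the signal.
    """
    # Mean green-channel brightness per frame; blood flow modulates it slightly.
    signal = frames[..., 1].mean(axis=(1, 2))
    signal = signal - signal.mean()

    # Dominant frequency within a plausible heart-rate band (0.7-4 Hz, i.e. 42-240 bpm).
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fps)
    spectrum = np.abs(np.fft.rfft(signal))
    band = (freqs >= 0.7) & (freqs <= 4.0)
    dominant_hz = freqs[band][np.argmax(spectrum[band])]
    return dominant_hz * 60.0  # beats per minute
```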

AI photo
Users can take their pulse by placing their fingertip over the back cameras of their Pixel phones. Charlotte Hu

This kind of personalization around health will hopefully allow users to get feedback on long-term patterns and events that may deviate from their normal baseline. Google is testing new features too, like an opt-in function for identifying who coughed, in addition to counting and recording coughs (both of which are already live) on Pixel. Although it’s still in the research phase, engineers at the company say that this feature can register the tone and timbre of the cough as vocal fingerprints for different individuals. 

Watch the full keynote below:

Spotify wants to understand your body on music https://www.popsci.com/technology/spotify-study-biometric/ Tue, 28 Feb 2023 12:00:00 +0000 https://www.popsci.com/?p=515874
the spotify app on a phone screen
Gery Wibowo / Unsplash

It teamed up with biometrics research company MindProber to study its users.

The post Spotify wants to understand your body on music appeared first on Popular Science.


Think about the music and podcasts you listen to, and how they affect your mood. If streaming audio content makes you happier, you’re not alone, and the proof is in the study data Spotify released today.

That finding comes courtesy of 426 free-tier Spotify users who volunteered to wear an electrodermal activity sensor on their palm any time they tuned in over the course of 40 days. The company learned that listening to either podcasts or music improved its users’ moods, and that the activities listeners participated in influenced the type of content they gravitated to.

Although part of Spotify’s motivation for this research is to help advertisers understand how users’ engagement habits with music and podcasts can be used to create a seamless ad experience, it also has interesting implications for scientific studies related to the human experience with sound. 

“The project is showing that you can actually study this stuff in the wild. The conditions here were as realistic as you can get considering these were people that were just living their lives,” study co-author Josh McDermott, associate professor in MIT’s Department of Brain and Cognitive Sciences, tells PopSci

In his view, this opens the door to a new kind of anthropology-like study that can look at how people deal with audio in their lives. “There’s this big cultural shift in the way that we consume music and other audio that really happened over the last decade. It’s just changed the way that people live and probably has a lot of important effects,” McDermott adds. “This is just one way to understand that.” 

Key to Spotify’s work was the electrodermal activity sensor, which measures sweat and variations in the electroconductivity of the skin.

“The reason why we went with that specific technology is that we really wanted to measure the impact of digital technologies throughout the day, so it was crucial for us to get outside of a lab environment and let our research participants use and interact with Spotify as they would normally do,” says Marion Boeri, global lead of Thought Leadership Research at Spotify.

This research follows a 2021 collaboration with a company called Neuro-Insight that measured users’ brain activity while they listened to Spotify. In the Neuro-Insight project, “we had people come in the lab, and we measured their neural activity when they’re listening to Spotify, which obviously helped us understand engagement that our platform drove, but it was something that was limiting us to that environment,” Boeri adds. 

In an effort to break free of that limitation, this time Spotify enlisted research participants from the US and the UK who had free Spotify accounts. Across 14,878 Spotify listening sessions, the company tracked what these people listened to, and asked them to fill out surveys before and after each session about the activity they were doing, their mood, if they remembered the ads they had heard, and if they were interested in the product advertised. Spotify’s researchers took a baseline measure of electrodermal activity before people listened to any audio, and used changes they observed as a metric for engagement. 
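
A minimal sketch of that engagement measure, comparing skin-conductance readings during a listening session against the pre-session baseline, might look like the function below. The readings and the idea of a simple relative change are assumptions for illustration, not Spotify's actual methodology.

```python
import statistics

def engagement_score(baseline_eda: list[float], session_eda: list[float]) -> float:
    """Relative change in mean skin conductance versus the pre-session baseline.

    Readings are in microsiemens; positive values suggest heightened arousal.
    """
    base = statistics.mean(baseline_eda)
    during = statistics.mean(session_eda)
    return (during - base) / base

# Example: conductance sampled before and during a listening session.
print(round(engagement_score([2.1, 2.0, 2.2], [2.6, 2.5, 2.7]), 3))  # 0.238
```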

[Related: Meet Spotify’s new AI DJ]

No matter the audio content, streaming boosted mood across the board. 

“You do see that people report their mood improves regardless of what they do. You see this boost in every activity [we measured],” McDermott says. “People are choosing what they consume and it makes them a little bit happier.”

There were also findings that broadly proved some long-suspected trends in audio science, like the fact that our environments dictate the types of audio content we gravitate toward in the moment.

“The musical attributes and the audio attributes that characterize what people are listening to vary a lot depending on what they’re doing,” he notes. For example, people like dancey music if they’re in a social setting, or if they’re being active. And they might like podcasts or wordier songs when they’re on a walk by themselves. “This is the kind of thing people have suspected intuitively for a long time, but it’s never been demonstrated,” McDermott says. “This was really the first time anybody had access to that.”

Inside the lab that’s growing mushroom computers https://www.popsci.com/technology/unconventional-computing-lab-mushroom/ Mon, 27 Feb 2023 20:00:00 +0000 https://www.popsci.com/?p=515615
electrodes hooked up to mushrooms
Recording electrical activity of split gill fungi Schizophyllum commune. Irina Petrova Adamatzky

The lead researcher says he is “planning to make a brain from mushrooms.”

The post Inside the lab that’s growing mushroom computers appeared first on Popular Science.


Upon first glance, the Unconventional Computing Laboratory looks like a regular workspace, with computers and scientific instruments lining its clean, smooth countertops. But if you look closely, the anomalies start appearing. A series of videos shared with PopSci show the weird quirks of this research: On top of the cluttered desks, there are large plastic containers with electrodes sticking out of a foam-like substance, and a massive motherboard with tiny oyster mushrooms growing on top of it. 

No, this lab isn’t trying to recreate scenes from “The Last of Us.” The researchers there have been working on stuff like this for a while: the lab was founded in 2001 with the belief that the computers of the coming century will be made of chemical or living systems, or wetware, that are going to work in harmony with hardware and software.

Why? Integrating these complex dynamics and system architectures into computing infrastructure could in theory allow information to be processed and analyzed in new ways. And it’s definitely an idea that has gained ground recently, as seen through experimental biology-based algorithms and prototypes of microbe sensors and kombucha circuit boards.

In other words, they’re trying to see if mushrooms can carry out computing and sensing functions.

Engineering photo
A mushroom motherboard. Andrew Adamatzky

With fungal computers, mycelium—the branching, web-like root structure of the fungus—acts as both the conductors and the electronic components of a computer. (Remember, mushrooms are only the fruiting body of the fungus.) Mycelium can receive and send electric signals, as well as retain memory. 

“I mix mycelium cultures with hemp or with wood shavings, and then place it in closed plastic boxes and allow the mycelium to colonize the substrate, so everything then looks white,” says Andrew Adamatzky, director of the Unconventional Computing Laboratory at the University of the West of England in Bristol, UK. “Then we insert electrodes and record the electrical activity of the mycelium. So, through the stimulation, it becomes electrical activity, and then we get the response.” He notes that this is the UK’s only wet lab—one where chemical, liquid, or biological matter is present—in any department of computer science.

Engineering photo
Preparing to record dynamics of electrical resistance of hemp shaving colonized by oyster fungi. Andrew Adamatzky

Classical computers see problems in binary: the ones and zeros that these devices traditionally use to represent information. However, most dynamics in the real world cannot always be captured through that system. This is the reason why researchers are working on technologies like quantum computers (which could better simulate molecules) and living brain cell-based chips (which could better mimic neural networks), because they can represent and process information in different ways, utilizing a series of complex, multi-dimensional functions, and provide more precise calculations for certain problems. 

Already, scientists know that mushrooms stay connected with the environment and the organisms around them using a kind of “internet” communication. You may have heard this referred to as the wood wide web. By deciphering the language fungi use to send signals through this biological network, scientists might be able to not only get insights about the state of underground ecosystems, but also tap into them to improve our own information systems. 

Cordyceps fungi
An illustration of the fruit bodies of Cordyceps fungi. Irina Petrova Adamatzky

Mushroom computers could offer some benefits over conventional computers. Although they can’t ever match the speeds of today’s modern machines, they could be more fault tolerant (they can self-regenerate), reconfigurable (they naturally grow and evolve), and consume very little energy.

Before stumbling upon mushrooms, Adamatzky worked on slime mold computers—yes, that involves using slime mold to carry out computing problems—from 2006 to 2016. Physarum, as slime molds are called scientifically, is an amoeba-like creature that spreads its mass amorphously across space. 

Slime molds are “intelligent,” which means that they can figure out their way around problems, like finding the shortest path through a maze without programmers giving them exact instructions or parameters about what to do. Yet, they can be controlled as well through different types of stimuli, and be used to simulate logic gates, which are the basic building blocks for circuits and electronics.

[Related: What Pong-playing brain cells can teach us about better medicine and AI]

Engineering photo
Recording electrical potential spikes of hemp shaving colonized by oyster fungi. Andrew Adamatzky

Much of the work with slime molds was done on what are known as “Steiner tree” or “spanning tree” problems that are important in network design, and are solved by using pathfinding optimization algorithms. “With slime mold, we imitated pathways and roads. We even published a book on bio-evaluation of the road transport networks,” says Adamatzky. “Also, we solved many problems with computational geometry. We also used slime molds to control robots.” 
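
For reference, a spanning tree problem asks for the cheapest set of links that connects every node in a network. A compact sketch of Prim's algorithm, one standard way to compute it and the kind of optimization the slime mold work tackled with living matter, is below; the toy road network is invented for the example.

```python
import heapq

def minimum_spanning_tree(graph: dict[str, dict[str, float]]) -> list[tuple[str, str, float]]:
    """Prim's algorithm: grow a tree by repeatedly adding the cheapest edge
    that reaches a new node. `graph` maps node -> {neighbor: edge_cost}."""
    start = next(iter(graph))
    visited = {start}
    edges = [(cost, start, nbr) for nbr, cost in graph[start].items()]
    heapq.heapify(edges)
    tree = []
    while edges and len(visited) < len(graph):
        cost, src, dst = heapq.heappop(edges)
        if dst in visited:
            continue
        visited.add(dst)
        tree.append((src, dst, cost))
        for nbr, c in graph[dst].items():
            if nbr not in visited:
                heapq.heappush(edges, (c, dst, nbr))
    return tree

# Toy road network: nodes are towns, weights are distances.
roads = {"A": {"B": 2, "C": 5}, "B": {"A": 2, "C": 1}, "C": {"A": 5, "B": 1}}
print(minimum_spanning_tree(roads))  # [('A', 'B', 2), ('B', 'C', 1)]
```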

When he had wrapped up his slime mold projects, Adamatzky wondered if anything interesting would happen if they started working with mushrooms, an organism that’s both similar to, and wildly different from, Physarum. “We found actually that mushrooms produce action potential-like spikes. The same spikes as neurons produce,” he says. “We’re the first lab to report about spiking activity of fungi measured by microelectrodes, and the first to develop fungal computing and fungal electronics.”  

Engineering photo
An example of how spiking activity can be used to make gates. Andrew Adamatzky

In the brain, neurons use spiking activities and patterns to communicate signals, and this property has been mimicked to make artificial neural networks. Mycelium does something similar. That means researchers can use the presence or absence of a spike as their zero or one, and map the different timing and spacing of the detected spikes to the various logic gates used in computing (OR, AND, and so on). Further, if you stimulate mycelium at two separate points, conductivity between them increases, and they communicate faster and more reliably, allowing memory to be established. This is like how brain cells form habits.
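
To make the spike-to-bit idea concrete, here is a hedged sketch of how a recorded voltage trace might be turned into binary values and combined into a gate. The threshold, window size, and gating scheme are invented for illustration; the lab's actual encoding schemes are more involved.

```python
def spikes_to_bits(voltage_mv: list[float], window: int = 50,
                   threshold: float = 0.5) -> list[int]:
    """Mark each time window with 1 if it contains a spike, else 0.

    voltage_mv: electrode readings in millivolts; window/threshold are illustrative.
    """
    bits = []
    for start in range(0, len(voltage_mv), window):
        chunk = voltage_mv[start:start + window]
        bits.append(1 if max(chunk) > threshold else 0)
    return bits

def and_gate(bits_a: list[int], bits_b: list[int]) -> list[int]:
    # A window counts as AND=1 only when both recording channels spiked in it.
    return [a & b for a, b in zip(bits_a, bits_b)]
```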

Mycelium with different geometries can compute different logical functions, and they can map these circuits based on the electrical responses they receive from it. “If you send electrons, they will spike,” says Adamatzky. “It’s possible to implement neuromorphic circuits… We can say I’m planning to make a brain from mushrooms.” 

Engineering photo
Hemp shavings in the shaping of a brain, injected with chemicals. Andrew Adamatzky

So far, they’ve worked with oyster fungi (Pleurotus djamor), ghost fungi (Omphalotus nidiformis), bracket fungi (Ganoderma resinaceum), enoki fungi (Flammulina velutipes), split gill fungi (Schizophyllum commune), and caterpillar fungi (Cordyceps militaris).

“Right now it’s just feasibility studies. We’re just demonstrating that it’s possible to implement computation, and it’s possible to implement basic logical circuits and basic electronic circuits with mycelium,” Adamatzky says. “In the future, we can grow more advanced mycelium computers and control devices.” 

Meet Spotify’s new AI DJ https://www.popsci.com/technology/spotify-ai-dj/ Wed, 22 Feb 2023 16:30:00 +0000 https://www.popsci.com/?p=514256
spotify's ai dj feature in their app
Take a look at Spotify's AI DJ. Spotify

Here’s how it was made—and how it will affect your listening experience.

The post Meet Spotify’s new AI DJ appeared first on Popular Science.


Spotify, the popular audio streaming app, is on a journey to make music-listening personal for its users. Part of that includes recommending playlists “For You,” like “Discover Weekly,” and summing up your year in audio with “Wrapped.” Today, the company announced that it is introducing an AI DJ that folds much of its past work on audio recommendation algorithms into a new feature. It’s currently rolling out for premium users in the US and Canada. 

The artificially intelligent DJ blends recommendations from across different personal and general playlists throughout the app. The goal is to create an experience where it picks the vibe that it thinks you will like, whether that’s new hits, your old favorites, or songs that you’ve had on repeat for weeks. And like a radio DJ, it will actually talk—giving some commentary, and introducing the song, before it queues it up. Users can skip songs, or even ask the DJ to change the vibe by clicking on the icon at the bottom right of the screen. It will refresh the lineup based on your interactions. 

The DJ comprises three main tech components: Spotify’s personalization technology, a generative AI from OpenAI that scripts the cultural context the DJ provides (alongside human writers), and an AI text-to-voice platform that was built through Spotify’s acquisition of Sonantic and is modeled on a real-life voice, that of Xavier “X” Jernigan, Spotify’s head of cultural partnerships. To train the AI DJ, Jernigan spent a long time in the studio recording speech samples. 

[Related: How Spotify trained an AI to transcribe music]

The text-to-speech system accounts for all the nuances in human speech, such as pitch, pacing, emphasis, and emotions. For example, if a sentence ended in a semicolon instead of a period, the voice inflection would be different.

There’s a weekly writer’s room that curates what they’re going to say about songs in the flagship playlists, such as those grouped by genres. There’s another group of writers and cultural experts that come in to discuss how they want to phrase the commentary around the songs they serve up to users. The generative AI then comes in and scales this base script, and tailors it to all the individual users. AI DJ is technically still in beta, and Spotify engineers are eager to take user feedback to add improvements in future versions.
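
Pieced together from the description above, one DJ segment flows roughly like the sketch below. The function bodies are toy stand-ins; the real components (Spotify's personalization models, OpenAI's generative scripting, and the Sonantic-based voice) are far more sophisticated, and the names here are placeholders.

```python
def pick_next_track(user_profile: dict) -> str:
    # Stand-in for Spotify's personalization models.
    return max(user_profile["affinity"], key=user_profile["affinity"].get)

def tailor_script(base_script: str, user_profile: dict, track: str) -> str:
    # Stand-in for the generative AI scaling the writers' base script per listener.
    return base_script.format(name=user_profile["name"], track=track)

def synthesize_voice(text: str, voice: str = "X") -> bytes:
    # Stand-in for the text-to-speech platform; returns fake "audio" bytes.
    return f"[{voice}] {text}".encode()

def dj_segment(user_profile: dict, base_script: str) -> dict:
    """One DJ segment: pick a track, tailor the commentary, then voice it."""
    track = pick_next_track(user_profile)
    commentary = tailor_script(base_script, user_profile, track)
    return {"track": track, "intro_audio": synthesize_voice(commentary)}

profile = {"name": "Ari", "affinity": {"synthwave hits": 0.9, "acoustic chill": 0.4}}
print(dj_segment(profile, "Hey {name}, here's a {track} pick for you.")["track"])
```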

[Related: The best Spotify add-ons and tricks]

Think of the individual “For You” recommendations as Lego pieces, Ziad Sultan, head of personalization at Spotify, tells PopSci. “It’s not that it takes all the playlists and merges them. It’s more, in order to build this playlist in the first place, we have had to build a lot of Lego pieces that understand the music, understand the user, and understand how to create the right combo,” he explains. “A lot of that is from the years of developing machine learning, getting the data, especially the playlist data, for example.” 

“Eighty-one percent of people say that the thing they love most about Spotify is the personalization,” says Sultan. “So they’re still going to have the things they know and love. But this is just a new choice, which is also about not choosing.”  

Try it for yourself in the “Music” feed of the app homepage, or see it in action below:

Update February 22, 2023: This article has been updated to clarify that “X” is the voice model that was used and not the name for Spotify’s AI DJ.

The ability for cities to survive depends on smart, sustainable architecture https://www.popsci.com/technology/moma-nyc-architecture-exhibit/ Tue, 21 Feb 2023 20:00:00 +0000 https://www.popsci.com/?p=513938
An architecture mockup of the Hunter's Point South Park.
An architecture mockup of the Hunter's Point South Park. Charlotte Hu

Creation and destruction is ongoing in NYC. These promising projects could be models for the future of construction.

The post The ability for cities to survive depends on smart, sustainable architecture appeared first on Popular Science.


All around New York City, building projects seem to be constantly in the works. But in an era in which climate resiliency and a growing population are key factors that architects must consider, the approach to construction and its related waste may require some more creative thinking. 

A new exhibit at the Museum of Modern Art in Manhattan, Architecture Now: New York, New Publics, shines a spotlight on ideas at the cutting edge of innovation that aim to reimagine the relationship between the city’s architecture, its people, and the nature around it. Here’s a peek at some of the projects that have been selected for display. 

“All of the projects we highlight are what we see as models for future construction in New York or in the world,” says Martino Stierli, chief curator of architecture and design at the Museum of Modern Art. “This exhibition is kind of an archipelago, where each of the projects is an island and you can roam around freely.”

Working with nature 

New York has seen a renewed focus on achieving cleaner energy in the next few decades. That includes decarbonizing buildings and transportation wherever possible. For example, a project at Jones Beach Energy and Nature Center in Long Island is putting this vision into practice. Converted from a former bathhouse, the new facility, which opened in September 2020, is net-zero—meaning that it generates all the energy it needs through renewables—and is designed to have a small footprint. It also has a climate resilient landscape. 

It features solar panels atop the building, geothermal wells that heat its insides, and restored beachscape with local native plants that filter stormwater and help secure sediments against erosion. There is a battery on site that stores extra electricity produced by the solar panels that can supply power through nights and stormy weather. “The building is interesting by itself. But you have to see it as a larger environmental system,” Stierli says. 

[Related: This startup plans to collect carbon pollution from buildings before it’s emitted]

On the front of climate resiliency, another project, Hunter’s Point South Waterfront Park, has taken into account how rising seas should influence the design of coastal structures. In one way or another, engineers across New York have been thinking of ways to fight the water, or keep it off. 

“This park is designed so part of it can flood. The coastline becomes much more like what it would’ve been naturally, so the water goes back and forth… As you know, New York before civilization was basically a swamp,” says Stierli. “So instead of building high walls to keep the water out, you have these artificial flood plains, and of course that creates a new, but ancient again, biosphere for plants and animals who have always lived in this presence of saltwater.”

Architects from WEISS/MANFREDI tailored the design to the specific ecological conditions and geography of the land. The second phase of the park opened in 2018 next to the East River, which is tidal, narrow, and prone to wave action. Because of this, they developed a landscaped, walkable fortified edge that protects the emerging wetlands from harsh wave action. In an extreme flooding event, the height of the wall is calibrated to gently let water in, allowing the wetlands to act like sponges to absorb flood water. After a storm, water is then slowly released in a safe and controlled way, Marion Weiss and Michael Manfredi, two architects from the firm, explain in an email. This design was tested through computer and analogue models that factored in the specific features of the East River. And the park held its own even against the real and unexpected test of Hurricane Sandy. 

When the team conducted research into the site history, they found that a marsh shoreline existed in the past along a wider and gentler tidal estuary, Tom Balsley, principal designer at SWA, said in an email. To reinstate the marsh in the present day, they worked in collaboration with civil, marine, and marsh ecologist consultants to create a balanced habitat.

Making use of trash 

Waste is another theme that echoes throughout the exhibit, especially when it comes to addressing the building industry’s relationship with trash and its effective use of supplies. “The construction and the architecture industry really has to come to terms with the fact that our resources are finite,” Stierli says. “This idea of reusing, recycling, is probably the most important aspect of contemporary thinking in architecture.” 

The TestBeds research project, for example, imagines how life-size prototypes for future buildings can be given a second life as greenhouses, or other community structures, instead of heading to the landfill. To illustrate how different bits and pieces of buildings and developments move across the city and find new homes, MoMA created an accompanying board game to help visitors understand the rules and processes behind new construction projects. They can scan a QR code to play it.  

“This is the only project I know that actually deals with architectural mockups, because no one ever asks about them. Often, they just get left behind. They get destroyed,” he adds. “And so here we have this idea to say, these are actually valid building components and you can just integrate them into designs. These are five propositions, one of them is actually built, which is this community garden shed here.” 

On the topic of waste, Freshkills Park—formerly the Fresh Kills Landfill—is in the spotlight for its unique ambitions. “This is the site of the largest dump in the US,” Stierli says. “Field Operations, who are very important landscape architects based in New York, they have been working with the city for the last 20 years or so to renaturate and to make it a park that is accessible and create a place for leisure and outdoor activities. Of course a lot of it has to do with the management of toxic waste.”

Part of this involved putting the Landfill Gas System in place that “collects and controls gas emission through a network of wells connected by pipes below the surface that convey the gas through a vacuum,” according to the park’s official website. “Once collected, the gas is processed to pipeline quality (recovery for domestic energy use) at an on–site LFG recovery plant.” There is also a leachate management system to remove and treat pollutants that are made when the waste breaks down. And of course, landfill engineers made sure to put many layers, liners, and caps between the waste and the new park soil. 

Some parts of Freshkills Park are now open to visitors. However, Stierli notes that “this is a work in progress.” 

New York, New Publics will be on display at The Museum of Modern Art in Manhattan, New York through July 29, 2023. 

A torpedo-like robot named Icefin is giving us the full tour of the ‘Doomsday’ glacier https://www.popsci.com/technology/icefin-robot-thwaites-glacier/ Fri, 17 Feb 2023 15:00:00 +0000 https://www.popsci.com/?p=513275
The Icefin robot under the sea ice.
Icefin under the sea ice. Rob Robbins, USAP Diver

It may look like a long, narrow tube, but this robot is useful for a range of scientific tasks.

The post A torpedo-like robot named Icefin is giving us the full tour of the ‘Doomsday’ glacier appeared first on Popular Science.


Thwaites, a notoriously unstable glacier in western Antarctica, is cracking and disintegrating, spelling bad news for sea level rise across the globe. Efforts are afoot to understand the geometry and chemistry of Thwaites, which is about the size of Florida, in order to gauge the impact that warming waters and climate change may have on it. 

An 11-foot tube-like underwater robot called Icefin is offering us a detailed look deep under the ice at how the vulnerable ice shelf in Antarctica is melting. By way of two papers published this week in the journal Nature, Icefin has been providing pertinent details regarding the conditions beneath the freezing waters. 

The torpedo-like Icefin was first developed at Georgia Tech, and the first prototype of the robot dates back to 2014. But it has since found a new home at Cornell University. This robot is capable of characterizing below-ice environments using the suite of sensors that it carries. It comes equipped with HD cameras, laser ranging systems, sonar, doppler current profilers, single beam altimeters (to measure distance), and instruments for measuring salinity, temperature, dissolved oxygen, pH, and organic matter. Its range is impressive: It can go down to depths of 3,280 feet and squeeze through narrow cavities in the ice shelf. 

Since Icefin is modular, it can be broken down, customized, and reassembled according to the needs of the mission. Researchers can remotely control Icefin’s trajectory, or let it set off on its own.  
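
Because Icefin is modular, one way to picture a mission build-out is as a simple payload configuration that gets pared down to what a given deployment needs. The data structure below is purely illustrative; only the instrument names and the rough 3,280-foot (about 1,000-meter) depth figure come from the description above.

```python
from dataclasses import dataclass, field

@dataclass
class MissionConfig:
    """Illustrative payload selection for a single under-ice deployment."""
    max_depth_m: float = 1000.0      # roughly the 3,280-ft depth noted above
    autonomous: bool = False         # remotely piloted vs. free-running survey
    modules: list = field(default_factory=lambda: [
        "HD camera", "sonar", "laser ranging", "doppler current profiler",
        "single-beam altimeter", "salinity", "temperature", "dissolved oxygen",
        "pH", "organic matter",
    ])

    def strip_down(self, keep: set) -> "MissionConfig":
        # Modularity: rebuild the vehicle with only the instruments a mission needs.
        return MissionConfig(self.max_depth_m, self.autonomous,
                             [m for m in self.modules if m in keep])

survey = MissionConfig(autonomous=True).strip_down({"sonar", "HD camera", "salinity"})
print(survey.modules)  # ['HD camera', 'sonar', 'salinity']
```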

Icefin isn’t alone in these cold waters. Its journey is part of the International Thwaites Glacier Collaboration (ITGC), which includes other radars, sensors, and vehicles like Boaty McBoatface

[Related: The ‘Doomsday’ glacier is fracturing and changing. AI can help us understand how.]

In 2020, through a nearly 2,000-foot-deep borehole drilled in the ice, Icefin ventured out across the ocean to the critical point where the Thwaites Glacier joins the Amundsen Sea and the ice starts to float. Data gathered by Icefin, and analyzed by human researchers, showed that the glacier had retreated up the ocean floor, thinning at the base, and melting outwards quickly. Additionally, the shapes of certain crevasses in the ice are helping funnel in warm ocean currents, making sections of the glacier melt faster than previously expected. 

These new insights, as foreboding as they are, may improve older models that have been used to predict the changes in Thwaites, and in the rates of possible sea level rise if it collapses. 

“Icefin is collecting data as close to the ice as possible in locations no other tool can currently reach,” Peter Washam, a research scientist from Cornell University who led analysis of Icefin data used to calculate melt rates, said in a press release. “It’s showing us that this system is very complex and requires a rethinking of how the ocean is melting the ice, especially in a location like Thwaites.”

Outside of Thwaites, you can find Icefin monitoring the ecosystems within ice-oceans around Antarctica’s McMurdo research station, or helping astrobiologists understand how life came to be in ocean worlds and their biospheres. 

Learn more about Icefin below: 

Cuttlefish have amazing eyes, so robot-makers are copying them https://www.popsci.com/technology/cuttlefish-eye-imaging-system/ Wed, 15 Feb 2023 20:00:00 +0000 https://www.popsci.com/?p=512718
a cuttlefish in darkness
Cuttlefish are clever critters with cool eyes. Will Turner / Unsplash

Cameras inspired by cuttlefish eyes could help robots and cars see better.

The post Cuttlefish have amazing eyes, so robot-makers are copying them appeared first on Popular Science.


Cuttlefish are smart, crafty critters that have long fascinated scientists. They’re masters of disguise, creative problem solvers, and they wear their feelings on their skin. On top of all that, they have cool-looking eyes and incredible sight. With w-shaped pupils, a curved retina, and a special arrangement of cells that respond to light, they have stellar 3D vision, great perception of contrast, and an acute sensitivity to polarized light. This vision system allows these creatures to hunt in underwater environments where lighting is often uneven or less than optimal. An international team of roboticists wanting to create machines that can see and navigate in those same conditions is now looking to nature for inspiration on artificial vision. 

In a new study published this week in Science Robotics, the team created an artificial vision design that was inspired by cuttlefish eyes. It could help the robots, self-driving vehicles, and drones of the future see the world better. 

“Aquatic and amphibious animals have evolved to have eyes optimized for their habitats, and these have inspired various artificial vision systems,” the researchers wrote in the paper. For example, imaging systems have been modeled after fish eyes with a panoramic view, the wide-spectrum vision of mantis shrimp, and the 360-degree field of view of fiddler crab eyes. 

[Related: A tuna robot reveals the art of gliding gracefully through water]

Because the cuttlefish has photoreceptors (nerve cells that take light and turn it into electrical signals) that are packed together in a belt-like region and stacked in a certain configuration, it’s good at recognizing approaching objects. This feature also allows them to filter out polarized light reflecting from the objects of interest in order to obtain a high visual contrast. 

Meanwhile, the imaging system the team made mimics the unique structural and functional features of the cuttlefish eye. It contains a w-shaped pupil that is attached on the outside of a ball-shaped lens with an aperture sandwiched in the middle. The pupil shape is intended to reduce distracting lights not in the field of vision and balance brightness levels. This device also contains a flexible polarizer on the surface, and a cylindrical silicon photodiode array that can convert photons into electrical currents. These kinds of image sensors usually pair one photodiode to one pixel. 

“By integrating these optical and electronic components, we developed an artificial vision system that can balance the uneven light distribution while achieving high contrast and acuity,” the researchers wrote. 

In a small series of imaging tests, the cuttlefish-inspired camera was able to pick up the details on a photo better than a regular camera, and it was able to fairly accurately translate the outlines of complex objects like a fish even when the light on it was harsh or shone at an angle. 

The team notes that this approach is promising for reducing blind spots that most modern cameras on cars and bots have trouble with, though they acknowledge that some of the materials used in their prototype may be difficult to fabricate on an industrial level. Plus, they note that “there is still room for further improvements in tracking objects out of sight by introducing mechanical movement systems such as biological eye movements.”

The post Cuttlefish have amazing eyes, so robot-makers are copying them appeared first on Popular Science.

How neutral atoms could help power next-gen quantum computers https://www.popsci.com/technology/neutral-atom-quantum-computer/ Fri, 10 Feb 2023 20:00:00 +0000 https://www.popsci.com/?p=511415
A prototype of QuEra's neutral atom quantum computer.
A prototype of QuEra's neutral atom quantum computer. QuEra

These atoms would function as individual qubits and would be controlled by "optical tweezers."

The post How neutral atoms could help power next-gen quantum computers appeared first on Popular Science.

There’s a twist in the race to build a practical quantum computer. Several companies are betting that there’s a better way to create the basic unit of a quantum computer, the qubit, than the leading method being pursued by giants like IBM and Google. 

To back up a moment to the fundamental design of quantum computers, think of qubits as the quantum equivalent of the binary bits in classical computers. But instead of storing an on-or-off state like a bit (the famous 1 or 0), a qubit can exist in a superposition, taking a value of 1, 0, or a weighted combination of the two. To exhibit these quantum properties, objects have to be either very small or very cold. In theory, this quality allows qubits to perform more complex calculations than bits can. In practice, though, the delicate state a qubit attains is hard to maintain, and once that state is lost, so is the information the qubit carried. How long qubits can stay in this quantum state currently sets the limit on the calculations that can be performed. 
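To make that “combination of the two” a little more concrete, a qubit’s state can be written as a pair of complex amplitudes whose squared magnitudes give the odds of reading out a 0 or a 1. The snippet below is a minimal, hardware-agnostic sketch; the gate and variable names are just illustrative.

import numpy as np

# A qubit state is a 2-component complex vector (alpha, beta) with
# |alpha|^2 + |beta|^2 = 1; |alpha|^2 is the chance of reading 0, |beta|^2 of reading 1.
ket0 = np.array([1, 0], dtype=complex)

# A Hadamard gate turns |0> into an equal superposition of 0 and 1.
H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)
state = H @ ket0

probs = np.abs(state) ** 2                      # -> [0.5, 0.5]
samples = np.random.choice([0, 1], size=10, p=probs)
print(probs, samples)

Real quantum processors manipulate physical systems rather than NumPy arrays, of course; the point is only that a qubit carries a richer state than a single bit.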

A frontrunner in the race to build a useful quantum computer is IBM, and its approach to these fundamental computing units relies on superconducting qubits. This technique involves engineering small pieces of superconducting metals and insulators into a circuit that behaves like an artificial atom in an ultra-cold environment (a more in-depth explanation is available here). 

[Related: IBM’s latest quantum chip breaks the elusive 100-qubit barrier]

But emerging companies like QuEra, Atom Computing, and Pasqal are trying something new: building quantum computers using neutral atoms, an approach that has long been seen as a promising platform. A neutral atom is an atom that contains a balanced number of positive and negative charges. 

Previously, this approach has largely been tested by small companies and university laboratories, but that might soon start to change. Working with qubits made from neutral atoms may in some ways be easier than fabricating an artificial atom, experts told PopSci in 2021. 

Lasers in the prototype of Atom Computing’s neutral atom quantum computer. Atom Computing

QuEra, for example, uses rubidium atoms as qubits. Rubidium appears on the periodic table as one of the alkali metals, with the atomic number 37. To get an atom to carry quantum information, researchers shine a laser at it to excite it to different energy levels. Two of these levels can be isolated and labeled as the 0 and 1 values for the qubit. In their excited states, atoms can interact with other atoms close by. Lasers also act as “optical tweezers” for individual atoms, holding them in place and reducing their movement, which cools them down and makes them easier to work with. The company says that it can pack thousands of laser-trapped atoms into a square millimeter in flexible configurations. QuEra claims that it has at times achieved coherence times of more than 1 second (coherence time is how long the qubits retain their quantum properties). For comparison, the average coherence time for IBM’s quantum chips is around 300 microseconds. 

“To assemble multiple qubits, physicists split a single laser beam into many, for example by passing it through a screen made of liquid crystals. This can create arrays of hundreds of tweezers, each trapping their own atom,” reported Nature. “One major advantage of the technique is that physicists can combine multiple types of tweezers, some of which can move around quickly — with the atoms they carry…This makes the technique more flexible than other platforms such as superconductors, in which each qubit can interact only with its direct neighbors on the chip.”

Already, peer-reviewed papers have been published testing the possibility of running quantum algorithms on this technology. A paper published in January in the journal Nature Physics even characterized the behavior of a neutral atom trapped in an optical tweezer. 

Currently, QuEra’s machine works with around 256 qubits and is offered as part of Amazon Web Services’ quantum computing service. According to an Amazon blog post, these neutral atom-based processors are suitable for “arranging atoms in graph patterns, and solving certain combinatorial optimization problems.”

Meanwhile, Atom Computing, which bases its qubits on the alkaline earth metal strontium, uses a vacuum chamber, magnetic fields, and lasers to create its array. Its prototype has caught the eye of the Pentagon’s DARPA research division, and it recently received funding as part of the agency’s Underexplored Systems for Utility-Scale Quantum Computing (US2QC) program. 

Pasqal, a Paris-based quantum computing startup, has also rallied quite a bit of capital behind this up-and-coming approach. According to TechCrunch, it raised around €100M in late January to build out its neutral atom quantum computer.

The post How neutral atoms could help power next-gen quantum computers appeared first on Popular Science.

The highlights and lowlights from the Google AI event https://www.popsci.com/technology/google-ai-in-paris/ Wed, 08 Feb 2023 20:00:00 +0000 https://www.popsci.com/?p=510821
Google SVP Prabhakar Raghavan at the AI event in Paris
Google SVP Prabhakar Raghavan at the AI event in Paris. Google / YouTube

Google Maps, Search, Translate, and more are getting an AI update.

The post The highlights and lowlights from the Google AI event appeared first on Popular Science.

Google search turns 25 this year, and although its birthday isn’t here yet, today executives at the company announced that the search function is getting some much anticipated AI-enhanced updates. Outside of search, Google is also expanding its AI capabilities to new and improved features across its translation service, maps, and its work with arts and culture. 

After Google announced on Monday that it was launching Bard, its own ChatGPT-like AI chatbot, Prabhakar Raghavan, a senior vice president at the company, introduced it live at a Google AI event streamed Wednesday from Paris, France. 

Raghavan highlighted how Google-pioneered research in transformers (that’s a neural network architecture used in language models and machine learning) set the stage for much of the generative AI we see today. He noted that while pure fact-based queries are the bread and butter of Google search as we know it today, questions in which there is “no one right answer” could be served better by generative AI, which can help users organize complex information and multiple viewpoints. 

The new conversational AI, Bard, which is built from a lightweight version of LaMDA, a language model Google developed in 2021, is meant to, for example, help users weigh the pros and cons of different car models if they’re looking into buying a vehicle. Bard is currently in the hands of a small group of testers and will roll out to more users soon. 

[Related: Google’s own upcoming AI chatbot draws from the power of its search engine]

However, the debut didn’t go as smoothly as the company planned. Multiple publications noticed that in a social media post Google shared about the new AI search feature, Bard gave the wrong information in response to a demo question. Specifically, when prompted with the query “what new discoveries from the James Webb Space Telescope can I tell my 9 year old about,” Bard responded with “JWST took the very first pictures of a planet outside of our own solar system,” which is inaccurate. According to Reuters and NASA, the first pictures of a planet outside of our solar system were taken by the European Southern Observatory’s Very Large Telescope (VLT) in 2004.

The stumble was badly timed, given the hype a day earlier around Microsoft’s announcement that it was integrating the AI behind ChatGPT into the company’s Edge browser and its search engine, Bing. 

Despite Bard’s bumpy breakout, Google did go on to make many announcements about AI-enhanced features trickling into its other core services. 

[Related: Google’s about to get better at understanding complex questions]

In Lens, an app based on Google’s image-recognition tech, the company is bringing a “search your screen” feature to Android users in the coming months. This will allow users to click on a video or image from their messages, web browser, and other apps, and ask the Google Assistant to find more information about items or landmarks that appear in the visual. For example, if a friend sends a video of her trip to Paris, Google Assistant can search the screen of the video and identify a landmark present in it, like the Luxembourg Palace. It’s part of Google’s larger effort to mix different modalities, like visual, audio, and text, into search in order to help it tackle more complex queries. 

In the maps arena, a feature called immersive view, which Google teased last year at the 2022 I/O conference, is starting to roll out today. Immersive view uses a method called neural radiance fields to generate a 3D scene from 2D images. It can even recreate subtle details like lighting and the texture of objects. 

[Related: Google I/O recap: All the cool AI-powered projects in the works]

Outside of the immersive view feature, Google is also bringing search with live view to Maps, which lets users scope out their surroundings by using their phone camera to scan the streets around them and get instant augmented reality-based information on shops and businesses nearby. It’s currently available in London, Los Angeles, New York, Paris, San Francisco, and Tokyo, and will be expanding soon to Barcelona, Dublin, and Madrid. For EV drivers, AI will be used to suggest charging stops and plan routes that factor in things like traffic, energy consumption, and more. Users can also expect these improvements to trickle into data-based projects Google has been running, such as Environmental Insights Explorer and Project Air View. 

To end on a fun note, Google showcased some of the work it’s been doing in using AI to design tools across arts and culture initiatives. As some might remember from the last few years, Google has used AI to locate you and your pet’s doppelgängers in historic art. In addition to solving research challenges like helping communities preserve their language word lists, digitally restoring paintings and other cultural artifacts, and uncovering the historic contributions of women in science, AI is being used in more amusing applications as well. For example, the Blob Opera was built from an algorithm trained on the voices of real opera singers. The neural network then puts its own interpretation on how to sing and harmonize based on its model of human singing. 

Watch the entire presentation below: 

Update on Feb 13, 2023: This post has been updated to clarify that Bard gave incorrect information in a social media post, not during the live event itself. This post has also been updated to remove a sentence referring to the delay between when the livestream concluded and when Google published the video of the event.

The post The highlights and lowlights from the Google AI event appeared first on Popular Science.

A simple guide to the expansive world of artificial intelligence https://www.popsci.com/technology/artificial-intelligence-definition/ Sun, 05 Feb 2023 17:00:00 +0000 https://www.popsci.com/?p=509522
A white robotic hand moving a black pawn as the opening move of a chess game played atop a dark wooden table.
Here's what to know about artificial intelligence. VitalikRadko / Depositphotos

AI is everywhere, but it can be hard to define.

The post A simple guide to the expansive world of artificial intelligence appeared first on Popular Science.

When you challenge a computer to play a chess game, interact with a smart assistant, type a question into ChatGPT, or create artwork on DALL-E, you’re interacting with a program that computer scientists would classify as artificial intelligence. 

But defining artificial intelligence can get complicated, especially when other terms like “robotics” and “machine learning” get thrown into the mix. To help you understand how these different fields and terms are related to one another, we’ve put together a quick guide. 

What is a good artificial intelligence definition?

Artificial intelligence is a field of study, much like chemistry or physics, that kicked off in 1956. 

“Artificial intelligence is about the science and engineering of making machines with human-like characteristics in how they see the world, how they move, how they play games, even how they learn,” says Daniela Rus, director of the computer science and artificial intelligence laboratory (CSAIL) at MIT. “Artificial intelligence is made up of many subcomponents, and there are all kinds of algorithms that solve various problems in artificial intelligence.” 

People tend to conflate artificial intelligence with robotics and machine learning, but these are separate, related fields, each with a distinct focus. Generally, you will see machine learning classified under the umbrella of artificial intelligence, but that’s not always true.

“Artificial intelligence is about decision-making for machines. Robotics is about putting computing in motion. And machine learning is about using data to make predictions about what might happen in the future or what the system ought to do,” Rus adds. “AI is a broad field. It’s about making decisions. You can make decisions using learning, or you can make decisions using models.”

AI generators, like ChatGPT and DALL-E, are machine learning programs, but the field of AI covers a lot more than just machine learning, and machine learning is not fully contained in AI. “Machine learning is a subfield of AI. It kind of straddles statistics and the broader field of artificial intelligence,” says Rus.

Complicating the picture further, algorithms that don’t involve machine learning at all can still solve problems in AI. For example, a computer can play the game Tic-Tac-Toe with a non-machine learning algorithm called minimax optimization. “It’s a straight algorithm. You build a decision tree and you start navigating. There is no learning, there is no data in this algorithm,” says Rus. But it’s still a form of AI.
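To make that distinction concrete, here is a minimal sketch of the minimax idea for Tic-Tac-Toe, written in Python. The board encoding and function names are invented for illustration; the point is that the program searches a decision tree and never touches training data.

def winner(board):
    """Return 'X', 'O', or None for a board given as a 9-character string."""
    lines = [(0,1,2),(3,4,5),(6,7,8),(0,3,6),(1,4,7),(2,5,8),(0,4,8),(2,4,6)]
    for a, b, c in lines:
        if board[a] != ' ' and board[a] == board[b] == board[c]:
            return board[a]
    return None

def minimax(board, player):
    """Return (score, move) from X's point of view: +1 X wins, -1 O wins, 0 draw."""
    w = winner(board)
    if w:
        return (1 if w == 'X' else -1), None
    moves = [i for i, cell in enumerate(board) if cell == ' ']
    if not moves:
        return 0, None                      # board full: draw
    best = None
    for m in moves:
        child = board[:m] + player + board[m+1:]
        score, _ = minimax(child, 'O' if player == 'X' else 'X')
        if best is None or (player == 'X' and score > best[0]) or (player == 'O' and score < best[0]):
            best = (score, m)
    return best

# X to move on an empty board: exploring the full game tree takes a few seconds
# and returns a move that guarantees at least a draw.
print(minimax(' ' * 9, 'X'))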

Back in 1997, the Deep Blue program that IBM used to beat Garry Kasparov was AI, but not machine learning, since it didn’t learn from gameplay data. “The reasoning of the program was handcrafted,” says Rus. “Whereas AlphaGo [a more recent Go-playing program] used machine learning to craft its rules and its decisions for how to move.”

When robots have to move around in the world, they have to make sense of their surroundings. This is where AI comes in: They have to see where obstacles are, and figure out a plan to go from point A to point B. 

“There are ways in which robots use models like Newtonian mechanics, for instance, to figure how to move, to figure how to not fall, to figure out how to grab an object without dropping it,” says Rus. “If the robot has to plan a path from point A to point B, the robot can look at the geometry of the space and then it can figure out how to draw a line that is not going to bump into any obstacles and follow that line.” That’s an example of a computer making decisions without machine learning, because it is not data-driven.

[Related: How a new AI mastered the tricky game of Stratego]

Or take teaching a robot to drive a car. In a machine learning-based solution to that task, the robot could watch how humans steer and go around bends, learning to turn the wheel a little or a lot based on how shallow the bend is. In the non-machine learning solution, the robot would simply look at the geometry of the road, consider the dynamics of the car, and use that to calculate the angle to apply to the wheel to keep the car on the road without veering off. Both are examples of artificial intelligence at work, though.

“In the model-based case, you look at the geometry, you think about the physics, and you compute what the actuation ought to be. In the data-driven [machine learning] case, you look at what the human did, and you remember that, and in the future when you encounter similar situations, you can do what the human did,” Rus says. “But both of these are solutions that get robots to make decisions and move in the world.” 
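As a rough, hedged illustration of the model-based route Rus describes, the snippet below uses a textbook kinematic “bicycle model” relation: given the car’s wheelbase and the radius of the bend ahead, simple geometry yields a steering angle, with no training data involved. The numbers are made up for illustration.

import math

def steering_angle(wheelbase_m, turn_radius_m):
    """Kinematic bicycle-model estimate: steering angle (radians) to follow a curve."""
    return math.atan(wheelbase_m / turn_radius_m)

# Example: a car with a 2.7 m wheelbase following a bend with a 40 m radius.
angle = steering_angle(2.7, 40.0)
print(f"steer about {math.degrees(angle):.1f} degrees")   # ~3.9 degrees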

Can you tell me more about how machine learning works?

“When you do data-driven machine learning that people equate with AI, the situation is very different,” Rus says. “Machine learning uses data in order to figure out the weights and the parameters of a huge network, called the artificial neural network.” 

Machine learning, as its name implies, is the idea of software learning from data, as opposed to software just following rules written by humans. 

“Most machine learning algorithms are at some level just calculating a bunch of statistics,” says Rayid Ghani, professor in the machine learning department at Carnegie Mellon University. Before machine learning, if you wanted a computer to detect an object, you would have to describe it in tedious detail. For example, if you wanted computer vision to identify a stop sign, you’d have to write code that describes the color, shape, and specific features on the face of the sign. 

“What people figured is that it would be exhaustive for people describing it. The main change that happened in machine learning is [that] what people were better at was giving examples of things,” Ghani says. “The code people were writing was not to describe a stop sign, it was to distinguish things in category A versus category B [a stop sign versus a yield sign, for example]. And then the computer figured out the distinctions, which was more efficient.”
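A minimal sketch of what “giving examples” can look like in practice, using the scikit-learn library: instead of hand-coding rules for a stop sign, you hand the algorithm labeled feature vectors and let it find the distinctions itself. The features and values here are invented for illustration; real systems typically learn from raw pixels.

from sklearn.linear_model import LogisticRegression

# Each example is a crude, made-up feature vector: [redness, octagon_score, triangle_score]
X = [
    [0.90, 0.80, 0.10],   # stop sign
    [0.85, 0.90, 0.05],   # stop sign
    [0.40, 0.10, 0.90],   # yield sign
    [0.50, 0.05, 0.85],   # yield sign
]
y = ["stop", "stop", "yield", "yield"]

model = LogisticRegression().fit(X, y)          # the "learning" step
print(model.predict([[0.88, 0.75, 0.20]]))      # most likely -> ['stop']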

Should we worry about artificial intelligence surpassing human intelligence?

The short answer, right now: Nope. 

Today, AI is very narrow in its abilities and is able to do specific things. “AI designed to play very specific games or recognize certain things can only do that. It can’t do something else really well,” says Ghani. “So you have to develop a new system for every task.” 

In one sense, Rus says that research under AI is used to develop tools, but not ones that you can unleash autonomously in the world. ChatGPT, she notes, is impressive, but it’s not always right. “They are the kind of tools that bring insights and suggestions and ideas for people to act on,” she says. “And these insights, suggestions and ideas are not the ultimate answer.” 

Plus, Ghani says that while these systems “seem to be intelligent,” all they’re really doing is looking at patterns. “They’ve just been coded to put things together that have happened together in the past, and put them together in new ways.” A computer will not on its own learn that falling over is bad. It needs to receive feedback from a human programmer telling it that it’s bad. 

[Related: Why artificial intelligence is everywhere now]

Machine learning algorithms can also be lazy. For example, imagine giving a system images of men, women, and non-binary individuals, and telling it to distinguish between the three. It’s going to find patterns that differ between the groups, but not necessarily ones that are meaningful or important. If all the men are wearing one color of clothing, or all the photos of women were taken against the same color backdrop, those colors are going to be the characteristics the system picks up on. 

“It’s not intelligent, it’s basically saying ‘you asked me to distinguish between three sets. The laziest way to distinguish was this characteristic,’” Ghani says. Additionally, some systems are “designed to give the majority answer from the internet for a lot of these things. That’s not what we want in the world, to take the majority answer that’s usually racist and sexist.” 

In his view, there still needs to be a lot of work put into customizing the algorithms for specific use cases, making it understandable to humans how the model reaches certain outputs based on the inputs it’s been given, and working to ensure that the input data is fair and accurate. 

What does the next decade hold for AI?

Computer algorithms are good at taking large amounts of information and synthesizing it, whereas people are good at looking through a few things at a time. Because of this, computers tend to be, understandably, much better at going through a billion documents and figuring out facts or patterns that recur. But humans are able to go into one document, pick up small details, and reason through them. 

“I think one of the things that is overhyped is the autonomy of AI operating by itself in uncontrolled environments where humans are also found,” Ghani says. In very controlled settings—like figuring out the price to charge for food products within a certain range based on an end goal of optimizing profits—AI works really well. However, cooperation with humans remains important, and in the next decades, he predicts that the field will see a lot of advances in systems that are designed to be collaborative. 

Drug discovery research is a good example, he says. Humans are still doing much of the work with lab testing and the computer is simply using machine learning to help them prioritize which experiments to do and which interactions to look at.

“[AI algorithms] can do really extraordinary things much faster than we can. But the way to think about it is that they’re tools that are supposed to augment and enhance how we operate,” says Rus. “And like any other tools, these solutions are not inherently good or bad. They are what we choose to do with them.”

The post A simple guide to the expansive world of artificial intelligence appeared first on Popular Science.

No one can predict exactly where birds go, but this mathematical model gets close https://www.popsci.com/environment/machine-learning-bird-migration/ Wed, 01 Feb 2023 20:00:00 +0000 https://www.popsci.com/?p=509185
migrating birds above body of water
Bird migration season is coming up. Barth Bailey / Unsplash

When in doubt, go with the BirdFlow.

The post No one can predict exactly where birds go, but this mathematical model gets close appeared first on Popular Science.

Migratory birds will soon be on the move. Starting in March, spring migration will be underway across North America as songbirds, shorebirds, waterfowl, and birds of prey return to town. Although scientists know that these critters will soon be taking flight, they’ve long been trying to pin down which routes the birds will take to get back to the US from the tropics where they’ve been overwintering. 

A study out this week in the journal Methods in Ecology and Evolution from a team at UMass Amherst and Cornell University describes a new strategy for predicting the flight paths of these migratory birds using computer modeling and sighting data from the citizen science platform eBird. And according to the researchers, the forecasting abilities of BirdFlow, as it’s called, are pretty accurate. 

“It’s incredibly difficult to get precise, real-time information on which birds are where, let alone where, exactly, they are going,” Miguel Fuentes, the paper’s lead author and graduate student in computer science at UMass Amherst, said in a press release

[Related: These new interactive maps reveal the incredible global journeys of migrating birds]

For example, scientists know that birds like American woodcocks will migrate each year from Texas and the Carolinas to the southern reaches of Canada. But they could take a number of routes to and from the two destinations.

Tracking tags can help provide some partial clues, but it’s hard to tag every bird of interest. And weather radar is good for visualizing bird movements in real-time, but can’t really give much information about which species are in the flock, or how individual birds are behaving. Moon-watching robots, on the other hand, are good for observing individual behaviors, but rely too heavily on cosmic timing. Besides, from year to year, birds may alter their routes. And like all wild animals, they’re inherently unpredictable.

To get a more accurate, live read on migratory birds, BirdFlow, the probability-estimating machine-learning model the team developed, is trained on weekly bird sightings and population distribution data from eBird. It was fine-tuned with up-to-date GPS and satellite tracking data in order to predict where certain birds are headed next on their journey. 

“BirdFlow models can be trained on any species, even those not tracked by eBird, as long as relative abundance models are available,” the researchers wrote in the paper. The model was tested on 11 species of North American birds like the American woodcock, wood thrush and Swainson’s hawk, and it outperformed other migration prediction models. Plus, it can correctly predict a bird’s flight path even without real-time GPS or tracking data. 
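BirdFlow’s actual model is more sophisticated, but the flavor of the approach can be sketched as sampling weekly movements from learned transition probabilities between regions. Everything below, the regions and the probabilities, is invented for illustration.

import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in: three regions and a weekly transition matrix giving the probability
# a bird moves from the row region to the column region. The real model fits its
# movement probabilities to eBird relative-abundance maps.
regions = ["south", "mid", "north"]
weekly_transition = np.array([
    [0.5, 0.4, 0.1],
    [0.1, 0.5, 0.4],
    [0.0, 0.2, 0.8],
])

def sample_route(start=0, weeks=6):
    route, state = [regions[start]], start
    for _ in range(weeks):
        state = rng.choice(3, p=weekly_transition[state])
        route.append(regions[state])
    return route

print(sample_route())   # e.g. ['south', 'mid', 'mid', 'north', ...]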

According to the press release, the team will use an $827,000 grant from the National Science Foundation to further improve BirdFlow and prepare a software package for ecologists, which they expect to release later this year. The researchers are also working on a more visual interface based on these models to engage the general public.

“In addition to the ecological questions investigated in our case study, samples from BirdFlow models can be used to study other phenomena such as stopover behavior and responses to global change,” the authors wrote. “Finally, BirdFlow can raise public awareness about biodiversity and ecosystem health by providing a tool for outreach to engage scientists, bird-watchers, policymakers and the general public.”

The post No one can predict exactly where birds go, but this mathematical model gets close appeared first on Popular Science.

A new artificial skin could be more sensitive than the real thing https://www.popsci.com/technology/artificial-skin-iontronic/ Fri, 27 Jan 2023 15:00:00 +0000 https://www.popsci.com/?p=508099
two hands
Could artificial skin be the next frontier in electronics?. Shoeib Abolhassani / Unsplash

It can detect direct pressure as well as objects that are hovering close by.

The post A new artificial skin could be more sensitive than the real thing appeared first on Popular Science.

Human skin is the body’s largest organ. It also provides one of our most important senses: touch. Touch enables people to interact with and perceive objects in the external world. In building robots and virtual environments, though, touch has not been the easiest feature to translate compared to, say, vision. Many labs are nonetheless trying to make touch happen, and various versions of artificial skin show promise in making electronics (like the ones powering prosthetics) smarter and more sensitive.

A study out this week in the journal Small presents a new type of artificial skin, created by a team at Nanyang Technological University in Singapore, that can not only sense direct pressure being applied to it, but also when objects are getting close to it. 

[Related: One of Facebook’s first moves as Meta: Teaching robots to touch and feel]

Already, various artificial skin mockups have been able to pick up on factors like temperature, humidity, surface details, and force, and turn those into digital signals. In this case, the artificial skin is “iontronic,” meaning it integrates ions and electrodes to enable sensing. 

Specifically, it’s made up of a porous, spongy layer soaked with a salty liquid, sandwiched between two fabric electrode layers embedded with nickel. These raw components are low-cost and easily scalable, which the researchers claim makes this type of technology suitable for mass production. The result is a material that is bendy, soft, and conductive. When pressure is applied to the material, the internal structure compresses, its capacitance changes, and that change produces an electric signal. 
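The capacitive principle at work here is the textbook one, not anything specific to this paper’s device: squeezing a dielectric layer between two electrodes shrinks the gap, and capacitance rises roughly as C = εA/d. A toy calculation with made-up dimensions:

# Generic parallel-plate capacitance, C = epsilon * A / d: a stand-in for why squeezing
# the spongy dielectric layer (shrinking the gap d) raises the measured capacitance.
EPS0 = 8.854e-12            # vacuum permittivity, F/m

def capacitance(rel_permittivity, area_m2, gap_m):
    return EPS0 * rel_permittivity * area_m2 / gap_m

rest = capacitance(80, 1e-4, 1.0e-3)      # made-up numbers: 1 cm^2 pad, 1 mm gap
pressed = capacitance(80, 1e-4, 0.7e-3)   # pressure thins the layer to 0.7 mm
print(f"delta C ~ {(pressed - rest) * 1e12:.1f} pF")   # roughly 30 pF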

“We created artificial skin with sensing capabilities superior to human skin. Unlike human skin that senses most information from touching actions, this artificial skin also obtains rich cognitive information encoded in touchless or approaching operations,” corresponding author Yifan Wang, an assistant professor at Nanyang Technological University in Singapore, said in a press release. “The work could lead to next-generation robotic perception technologies superior to existing tactile sensors.”

The design of the device also creates a “fringing electric field” around the edge of the skin. This field can sense when objects get close and can also discern the material an object is made of. In a small proof-of-concept demo, for example, it could distinguish between plastic, metal, and human skin. 

As for use cases, the artificial skin can be put onto robot fingers or on a control interface for an electronic game that uses the touch of the finger to move the characters. In their experiment, users played the game Pac-Man, and navigated through electronic maps by interacting with a panel of the artificial skin. 

The post A new artificial skin could be more sensitive than the real thing appeared first on Popular Science.

A squishy new robot uses syringes and physics to mosey along https://www.popsci.com/technology/soft-robot-syringe-pump/ Tue, 24 Jan 2023 23:00:00 +0000 https://www.popsci.com/?p=507611
cornell soft robot
Fluids help this robot move. Cornell University

A new invention from engineers at Cornell moves by pumping fluids.

The post A squishy new robot uses syringes and physics to mosey along appeared first on Popular Science.

When we think of robots, we typically think of clunky gears, mechanical parts, and jerky movements. But a new generation of robots has sought to break that mold. 

Since Czech playwright Karel Čapek first coined the term “robot” in 1920, these machines have evolved into many forms and sizes. Robots can now be hard, soft, large, microscopic, disembodied, or human-like, with joints controlled by a range of unconventional motors like magnetic fields, air, or light. 

A new six-legged soft robot from a team of engineers at Cornell University has put its own spin on motion, using fluid-powered motors to achieve complex movements. The result: A free-standing bug-like contraption carrying a backpack with a battery-powered Arbotix-M controller and two syringe pumps on top. The syringes pump fluid in and out of the robot’s limbs as it ambles along a surface at a rate of 0.05 body lengths per second. The design of the robot was described in detail in a paper published in the journal Advanced Intelligent Systems last week. 


The robot was born out of Cornell’s Collective Embodied Intelligence Lab, which is exploring ways that robots can think and collect information about the environment with other parts of their body outside of a central “brain,” kind of like an octopus. In doing this, the robot would rely on its version of reflexes, instead of on heavy computation, to calculate what to do next. 

[Related: This magnetic robot arm was inspired by octopus tentacles]

To build the robot, the team created six hollowed-out silicone legs. Inside the legs are fluid-filled bellows (picture the inside of an accordion) and interconnecting tubes arranged into a closed system. The tubes alter the viscosity of the fluid flowing through the system, contorting the shape of the legs; the geometry of the bellows structure allows fluid from the syringe to move in and out in specific ways that adjust the position and pressure inside each leg, making them extend stiffly or deflate into their resting state. Coordinating different, alternating combinations of pressure and position creates a cyclic program that makes the legs, and the robot, move.  
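To give a sense of what such a cyclic pressure program might look like, here is a purely illustrative sketch of an alternating-tripod gait schedule. It is not Cornell’s controller, and the function that “commands” a pump here just prints.

import itertools, time

# Illustrative only: six legs split into two tripods; each phase sets which tripod
# is pressurized (extended) and which is vented (relaxed).
TRIPOD_A, TRIPOD_B = (0, 2, 4), (1, 3, 5)

def set_leg_pressure(leg, pressurized):
    # Stand-in for commanding a syringe pump; a real controller would move a plunger.
    print(f"leg {leg}: {'extend' if pressurized else 'relax'}")

for phase in itertools.islice(itertools.cycle(["A", "B"]), 4):
    active = TRIPOD_A if phase == "A" else TRIPOD_B
    for leg in range(6):
        set_leg_pressure(leg, leg in active)
    time.sleep(0.5)   # hold the phase, then swap tripods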

According to a press release, Yoav Matia, a postdoctoral researcher at Cornell and an author on the study, “developed a full descriptive model that could predict the actuator’s possible motions and anticipate how different input pressures, geometries, and tube and bellow configurations achieve them–all with a single fluid input.”

Because of the flexibility of these rubber joints, the robot is also able to switch its gait, or walking style, depending on the landscape or nature of the obstacles it’s traversing. The researchers say that the technology behind these fluid-based motors and nimble limbs can be applied to a range of other applications, such as 3D-printed machines and robot arms.

The post A squishy new robot uses syringes and physics to mosey along appeared first on Popular Science.

From film to forensics, here’s how lidar laser systems are helping us visualize the world https://www.popsci.com/technology/lidar-use-cases/ Sat, 21 Jan 2023 12:00:00 +0000 https://www.popsci.com/?p=506859
movie set
A movie production ongoing. DEPOSIT PHOTOS

The technology is being applied in wacky and unexpected ways.

The post From film to forensics, here’s how lidar laser systems are helping us visualize the world appeared first on Popular Science.

Lidar, a way to use laser light to measure how far away objects are, has come a long way since it was first put to work on airplanes in 1960. Today, it can be seen mounted on drones, robots, self-driving cars, and more. Since 2016, Leica Geosystems has been thinking of ways to apply the technology to a range of industries, from forensics, to building design, to film. (Leica Geosystems was acquired by Swedish industrial company Hexagon in 2005 and is separate from Leica Microsystems and Leica Camera). 

To do this, Leica Geosystems squeezed the often clunky 3D-scanning lidar technology down to a container the size of a soft drink can. A line of products called BLK is specifically designed for the task of “reality capture,” and is being used in a suite of ongoing projects, including the mapping of ancient water systems hidden beneath Naples that were once used to naturally cool the city, the exploration of Egyptian tombs, and the modeling of the mysterious contours of Scotland’s underground passages, as Wired UK recently covered. 

The star of these research pursuits is the BLK360, which, like a 360-degree camera, swivels around on a tripod to image its surroundings. Instead of taking photos, though, it measures everything with lasers. The device can be set up and moved around to create multiple scans that are then compiled to construct a 3D model of an environment. “That same type of [lidar] sensor that’s in the self-driving car is used in the BLK360,” says Andy Fontana, Reality Capture specialist at Leica Geosystems. “But instead of having a narrow field of view, it has a wide field of view. So it goes in every direction.”

[Related: Stanford researchers want to give digital cameras better depth perception]

Besides the BLK360, Leica Geosystems also offers a flying sky scanner, a scanner for robots, and a scanner that can be carried and used on the go. The Rhode Island School of Design-led team studying Naples’ waterways is using both the BLK360 and the portable devices to scan as much of the city as possible. Figuring out the particular designs ancient cities used to create conduits for water as natural cooling infrastructure can provide insights into how modern cities around the world might mitigate the urban heat island effect. 


Once all the scans come off the devices, they exist as a 3D point cloud—clusters of data points in space. This format is frequently used in the engineering industry, and it can also be used to generate visualizations, like in the Scotland souterrain project. “You can see that it’s pixelated. All of those little pixels are measurements, individual measurements. So that’s kind of what a point cloud is,” Fontana explains. “What you can do with this is convert it and actually make it into a 3D surface. This is where you can use this in a lot of other applications.”
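Converting a point cloud into a surface is a standard operation in open-source tools, too. As a hedged example, using the Open3D library rather than Leica’s own software and a hypothetical file name, Poisson surface reconstruction turns the points into a triangle mesh:

import open3d as o3d

# Load a scan exported as a point cloud (the filename is hypothetical) and turn the
# points into a triangle mesh via Poisson surface reconstruction.
pcd = o3d.io.read_point_cloud("scan.ply")
pcd.estimate_normals()                       # Poisson reconstruction needs normals

mesh, densities = o3d.geometry.TriangleMesh.create_from_point_cloud_poisson(pcd, depth=9)
o3d.io.write_triangle_mesh("scan_surface.ply", mesh)
print(mesh)                                  # reports vertex and triangle counts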

[Related: A decked out laser truck is helping scientists understand urban heat islands]

Lidar has become an increasingly popular tool in archaeology, as it can capture more accurate dimensions of a space than images alone, with scans that take less than a minute—and can be triggered remotely from a smartphone. But Leica Geosystems has found an assortment of other useful applications for this type of 3D data.

One of the industries interested in this tech is film. Imagine this scenario: a major studio constructs an entire movie set for an expensive action film. Particular structures and platforms are needed for a specific scene. After the scene is captured, the set gets torn down to make room for another set to be erected. If, in the editing process, the footage turns out not to be good enough, the crew would have to rebuild that whole structure and bring people back—a costly process. 

However, another option now is for the movie crew to do a scan of every set they build. If they miss something or need to make a last-minute addition, they can use the 3D scan to edit the scene virtually on the computer. “They can fix things in CGI way easier than having to rebuild [the physical set],” Fontana says. “And if it’s too big a lift to do it on the computer, they can rebuild it really accurately because they have the 3D data.” 

[Related: These laser scans show how fires have changed Yosemite’s forests]

Other than film, forensics is a big part of Leica Geosystems’ business. Instead of only photographing a crime scene, investigators can now scan it, and this is done for a couple of different reasons. “Let’s say it’s a [car] crash scene. If they take a couple of scans you can have the entire scene captured in 2 minutes in 3D. And then you can move the cars out of the way of traffic,” says Fontana. “That 3D data can be used in court. In the scan, you can even see skidmarks. You can see that this person was braking and there were these skidmarks, and they can calculate the weight of the car, compared to the length of the skidmark, to see how fast they were going.” 
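The textbook version of that skid-mark calculation relates skid length and road friction to a minimum speed, v = sqrt(2·μ·g·d); in this simplest form the vehicle’s weight actually cancels out, though real reconstructions account for more factors. A worked example with assumed values:

import math

def min_speed_from_skid(skid_length_m, friction_coefficient=0.7, g=9.81):
    """Textbook skid-to-stop estimate: minimum speed (m/s) at the start of the skid."""
    return math.sqrt(2 * friction_coefficient * g * skid_length_m)

v = min_speed_from_skid(25.0)                 # a 25 m skid on dry asphalt (mu ~0.7)
print(f"{v:.1f} m/s (~{v * 3.6:.0f} km/h)")   # ~18.5 m/s, roughly 67 km/h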

With more graphic situations, like a murder or a shooting, this 3D data can be used to create “cones that show a statistical confidence of where that bullet came from based on how it hit the wall,” he says. 

As lidar continues to be expanded in tried and true applications, the growing variety of use cases will hopefully inspire innovators to think of ever more new approaches for this old tech. 

The post From film to forensics, here’s how lidar laser systems are helping us visualize the world appeared first on Popular Science.

It’s not a UFO—this drone is scooping animal DNA from the tops of trees https://www.popsci.com/technology/e-dna-drone-tree-top/ Wed, 18 Jan 2023 22:22:15 +0000 https://www.popsci.com/?p=506207
drone on branch
An eDNA sampling drone perched on a branch. ETH Zurich

This flying robot can help ecologists understand life in forest canopies.

The post It’s not a UFO—this drone is scooping animal DNA from the tops of trees appeared first on Popular Science.

If an animal passes through the forest and no one sees it, does it leave a mark? A century ago, there would have been no way to pick up whatever clues were left behind. But with advancements in DNA technology, particularly environmental DNA (eDNA) detecting instruments, scientists can glean which wildlife visited an area based on genetic material in poop, as well as microscopic skin and hair cells that critters shed and leave behind. For ecologists seeking to measure an ecosystem’s biodiversity as non-invasively as possible, eDNA can be a treasure trove of insight: it can capture the presence of multiple species in just one sample. 

But collecting eDNA is no easy task. Forests are large open spaces that aren’t often easily accessible (canopies, for example, are hard to reach), and eDNA could be lurking anywhere. One way to break up this big problem is to focus on a particular surface in the forest to sample eDNA from, and use a small robot to go where humans can’t. That’s the chief strategy of a team of researchers from ETH Zurich, the Swiss Federal Institute for Forest, Snow and Landscape Research WSL, and company SPYGEN. A paper on their approach was published this week in the journal Science Robotics

In aquatic environments, eDNA-gathering robots sip and swim to do their jobs. But to reach the treetops, not only do researchers have to employ flying drones (which are tougher to orient and harder to protect), these drones also need to be able to perch on a variety of surfaces. 

[Related: These seawater-sipping robots use drifting genes to make ocean guest logs]

The design the Swiss team came up with looks much like a levitating basket, or maybe a miniature flying saucer. They named the 2.6-pound contraption eDrone. It has a cage-like structure made up of four arcs that extend below a ring mainframe measuring around 17 inches in diameter. The ring and cage-like body protect the drone and its four propellers from obstacles, kind of like the ring around a bumper car. 

To maneuver, the eDrone uses a camera and a “haptic-based landing strategy,” according to the paper, that can perceive the position and magnitude of forces being applied to the body of the robot in order to map out the appropriate course of action. To help it grip, there are also features like non-slip material, and carbon cantilevers on the bottom of each arc. 

Once it firmly touches down, the drone uses a sticky material on each arc to peel an eDNA sample off the tree branch and stow it away for later analysis. In a small proof-of-concept run, the eDrone successfully obtained eDNA samples from seven trees across three different families. That variety matters because different tree species have their own branch morphologies (some are cylindrical and others have more irregular branches jutting out), and different trees host different animals and insects. 

“The physical interaction strategy is derived from a numerical model and experimentally validated with landings on mock and real branches,” the researchers wrote in the paper.  “During the outdoor landings, eDNA was successfully collected from the bark of seven different trees, enabling the identification of 21 taxa, including insects, mammals, and birds.”

Although the robot did its intended job well in these small trials, the researchers noted that more extensive studies are needed on how its performance may be affected by tree species beyond the ones they tested, or by changing environmental conditions like wind or overcast skies. Moreover, they propose that eDNA gathering by robot can be an additional way to sample eDNA in forests alongside other methods, like analyzing eDNA from pooled rainwater. 

“By allowing these robots to dwell in the environment, this biomonitoring paradigm would provide information on global biodiversity and potentially automate our ability to measure, understand, and predict how the biosphere responds to human activity and environmental changes,” the team wrote. 

Watch the drone in action below: 

The post It’s not a UFO—this drone is scooping animal DNA from the tops of trees appeared first on Popular Science.

This startup plans to collect carbon pollution from buildings before it’s emitted https://www.popsci.com/technology/carbonquest-building-carbon-capture/ Fri, 13 Jan 2023 15:00:00 +0000 https://www.popsci.com/?p=505296
high rise buildings in new york
High rise buildings generate a good proportion of greenhouse gas emissions. Dan Calderwood / Unsplash

CarbonQuest's carbon capture system is being installed at five other locations after a successful pilot test.

The post This startup plans to collect carbon pollution from buildings before it’s emitted appeared first on Popular Science.

After a successful pilot run, a start-up called CarbonQuest is expanding its footprint in New York City. Its mission? Outfit high-rise buildings in carbon capture technology.

Carbon capture tech aims to do exactly what the name implies: capture carbon dioxide emissions generated by burning fossil fuels. It’s just one of the many experimental methods to combat the climate crisis. In this case, instead of grabbing carbon out of the air, the goal would be to prevent it from being expelled by the building in the first place.

“The deal marks the first multibuilding deployment of a new technology that could prove crucial for eliminating carbon emissions from buildings,” Canary Media reported. 

One year ago, the company launched its first working system at 1930 Broadway, which is a more than 350,000 square foot luxury apartment building in Lincoln Square owned by Glenwood Management. The heart of the technology is tucked away in the building basement, taking up the equivalent of three parking spaces. 

While the building operates as usual, CarbonQuest’s equipment captures emissions generated by activities like cooking and heating with natural gas, filters the carbon dioxide out of the mix of exhaust gases, and turns it into a liquid by applying pressure. The company uses software to verify, measure, and report carbon dioxide emissions to third-party verifiers, auditors, and regulators. Since its technology provides “point source capture,” it would in theory prevent any carbon dioxide from being discharged into the atmosphere. (Read PopSci’s guide to carbon capture and storage here.)

[Related: The truth about carbon capture technology]

This liquid carbon dioxide can then be re-used in applications like specially formulated concrete blocks, sustainable jet fuel, chemical manufacturing, algae bioreactors, and more. Currently, Glenwood Management sells its liquid carbon dioxide to concrete maker Glenwood Mason Supply (same name, but unrelated to the management company). 

Some studies suggest that injecting carbon dioxide into concrete can alter its properties, making it stronger than traditional concrete. The Department of Transportation in the City of San Jose has even used carbon dioxide-infused concrete for their ramps

Structures like 1930 Broadway are facing growing pressure to become more sustainable, with the latest incentive coming from NYC’s Local Law 97 requiring large buildings to meet new energy efficiency and greenhouse gas emissions limits by 2024. Those limits will become even more strict in 2030. (The Biden administration rolled out similar emission-cutting regulations around federal buildings.)

According to the City of New York, an estimated “20-25 percent of buildings will exceed their emissions limits in 2024, if they take no action to improve their building’s performance. In 2030, if owners take no action to make improvements, approximately 75-80 percent of buildings will not comply with their emission limits.” 

In the 1930 Broadway case study, technology implemented by CarbonQuest is “expected to cut 60-70 percent of CO2 emissions from natural gas usage” and reduce a building’s annual carbon emissions by 25 percent. Without such an instrument in place, the building could be penalized hundreds of thousands of dollars by the city every year after 2024. Glenwood Management has already ordered five more systems for other rental properties in the city to be installed by March of 2023, according to an announcement earlier this month.

“The installations at these buildings — which include The Fairmont (300 East 75th Street), The Bristol (300 East 56th Street), The Paramount Tower (240 East 39th Street), The Barclay (1755 York Avenue) and The Somerset (1365 York Avenue) — come on the heels of the success of Glenwood’s pilot project with CarbonQuest at The Grand Tier (1930 Broadway), which is the first commercially operational building carbon capture project on the market,” the companies elaborated in the press release

Buildings are the greatest source of carbon emissions in NYC by proportion—they make up around 70 percent of the city’s greenhouse gas emissions (to compare, here are the greatest sources of carbon emissions nationally). 

Engineers have been brainstorming ways to make high-rise buildings greener, including rethinking design, construction materials, and the construction process. Others have considered integrating plants and outer skins on the surfaces of skyscrapers to help them conserve energy. CarbonQuest claims that it hasn’t had any direct competitors in the building space, although many big companies have been investing in nascent technologies that can remove and repurpose greenhouse gas emissions. 

Ultimately, capturing carbon emissions would be a separate, less disruptive way of reducing emissions compared to electrification or heat pumps, though it alone is not likely to be the end-all solution. “Building Carbon Capture can provide a cost-effective means of providing immediate reductions while the grid, over time, becomes greener,” the company noted in its FAQ page. 

The post This startup plans to collect carbon pollution from buildings before it’s emitted appeared first on Popular Science.

The ‘Doomsday’ glacier is fracturing and changing. AI can help us understand how. https://www.popsci.com/technology/ai-thwaites-glacier/ Tue, 10 Jan 2023 19:00:00 +0000 https://www.popsci.com/?p=504366
An image of the Thwaites Glacier from NASA
Thwaites Glacier. NASA

Machine learning could be used to take a more nuanced look at the satellite images of the glacier beneath the ice and snow.

The post The ‘Doomsday’ glacier is fracturing and changing. AI can help us understand how. appeared first on Popular Science.

The Doomsday glacier has been on everyone’s minds lately. And it should be. With estimates that its eventual collapse could help push sea levels up by as much as 10 feet, there’s much to worry about. Because of the precarious location of this Florida-sized glacier, when it goes, it will set off a chain of melting events. 

In the past few years, teams of researchers have been racing against time to study and understand the Thwaites Glacier—the formal name of the so-called Doomsday glacier—and have dispatched several tools to help them do so, including an auto-sub named Boaty McBoatFace

Previous work focused on figuring out how the glacier is melting and how it’s affecting the seawater ecology in its immediate environment. Now, a new study in Nature Geoscience uses machine learning to analyze the ways in which the ice shelf has fractured and reconsolidated over a span of six years. Led by scientists from the University of Leeds and the University of Bristol, the research employs an AI algorithm that looks at satellite imagery to monitor and model the ways the glacier has been changing, and to mark where notable stress fractures have been occurring. 

The artificial intelligence algorithm has an interesting backstory: According to a press release, this AI was adapted from an algorithm that was originally used to identify cells in microscope images. 

[Related: We’re finally getting close-up, fearsome views of the doomsday glacier]

During the study, the team from Leeds and Bristol closed in on an area of the glacier where “the ice flows into the sea and begins to float.” This is also the start of two ice shelves: the Thwaites Eastern ice shelf and the Thwaites Glacier ice tongue. “Despite being small in comparison to the size of the entire glacier, changes to these ice shelves could have wide-ranging implications for the whole glacier system and future sea-level rise,” the researchers explained in the press release.

Here’s where the AI comes in handy—it can take a more nuanced look at the satellite images of the glacier beneath the ice and snow. By doing so, it allowed scientists to perceive how different elements of the ice sheet have interacted with one another over the years. For example, in times when the ice flow is faster or slower than average, more fractures tend to form. And having more fractures can, in turn, alter the speed of ice flow. Moreover, AI allows the researchers to quickly make sense of the underlying patterns influencing glacier melting from the “deluge of satellite images” they receive each week. A more in-depth look at the model they developed for the study is available here. 
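The paper’s exact network isn’t spelled out here, but the general recipe behind this kind of image analysis, a fully convolutional segmentation model that outputs a per-pixel “fracture” probability for each satellite tile, can be sketched as follows. Every name and shape below is illustrative; this is not the Leeds and Bristol team’s model.

import torch
import torch.nn as nn

# A toy fully convolutional segmenter: input is a 1-channel satellite tile, output is a
# per-pixel "fracture probability" map. Illustrative only, not the study's actual network.
class TinyFractureNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 1, 1),            # 1x1 conv -> one logit per pixel
        )

    def forward(self, x):
        return torch.sigmoid(self.net(x))   # probabilities in [0, 1]

tile = torch.randn(1, 1, 256, 256)          # a fake 256x256 image tile
fracture_map = TinyFractureNet()(tile)
print(fracture_map.shape)                   # torch.Size([1, 1, 256, 256])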

In an area like Antarctica that humans have a hard time accessing, remote, automated technologies have become a vital way to keep an eye on events that could have impacts globally. Outside of diving robots and roving satellites, scientists are also using animal-watching drones, balloons, and more. Plus, there’s a plan to get more high-speed internet to Antarctica’s McMurdo Station to make the process of transmitting data to the outside world easier.

The post The ‘Doomsday’ glacier is fracturing and changing. AI can help us understand how. appeared first on Popular Science.

This AI is no doctor, but its medical diagnoses are pretty spot on https://www.popsci.com/technology/ai-doctor-google-deepmind/ Fri, 06 Jan 2023 15:00:00 +0000 https://www.popsci.com/?p=503684
doctor on computer
Can AI diagnose medical conditions better than a human? DEPOSIT PHOTOS

Asking an AI about your health issues might be better than WebMD, but it does come with some caveats.


Various research groups have been teasing the idea of an AI doctor for the better part of the past decade. In late December, computer scientists from Google and DeepMind put forth their version of an AI clinician that can diagnose a patient’s medical conditions based on their symptoms, using a large language model called PaLM.

Per a preprint paper published by the group, their model scored 67.6 percent on a benchmark test containing questions from the US Medical Licensing Examination, which they claim surpassed previous state-of-the-art software by 17 percent. One version of it performed at a similar level to human clinicians. But there are plenty of caveats that come with this algorithm, and others like it.

Here are some quick facts about the model: It was trained on a dataset of over 3,000 commonly searched medical questions, and six other existing open datasets for medical questions and answers, including medical exams and medical research literature. In their testing phase, the researchers compared the answers from two versions of the AI to a human clinician, and evaluated these responses for accuracy, factuality, relevance, helpfulness, consistency with current scientific consensus, safety, and bias. 
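To make that benchmark figure concrete, here is a rough sketch of how multiple-choice accuracy on an exam-style test is typically computed. The `ask_model` function is a hypothetical stand-in for a call to a large language model, and the questions are invented placeholders rather than real exam items.

```python
# Illustrative scoring loop; ask_model and the questions are placeholders.
def ask_model(prompt: str) -> str:
    return "C"  # pretend the model always answers "C"

def format_prompt(question: str, options: dict[str, str]) -> str:
    lines = [question] + [f"({k}) {v}" for k, v in sorted(options.items())]
    lines.append("Answer with a single letter.")
    return "\n".join(lines)

benchmark = [
    {"question": "Placeholder question 1?", "options": {"A": "...", "B": "...", "C": "..."}, "answer": "C"},
    {"question": "Placeholder question 2?", "options": {"A": "...", "B": "...", "C": "..."}, "answer": "A"},
]

correct = sum(
    ask_model(format_prompt(item["question"], item["options"])) == item["answer"]
    for item in benchmark
)
print(f"accuracy: {correct / len(benchmark):.1%}")
```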

Adriana Porter Felt, a software engineer who works on Google Chrome and was not a part of the paper, noted on Twitter that the version of the model that answered medical questions similarly to human clinicians accounts for the added feature of “instruction prompt tuning, which is a human process that is laborious and does not scale.” This includes carefully tweaking the wording of the question in a specific way that allows the AI to retrieve the correct information.

[Related: Google is launching major updates to how it serves health info]

The researchers even wrote in the paper that their model “performs encouragingly, but remains inferior to clinicians,” and that the model’s “comprehension [of medical context], recall of knowledge, and medical reasoning improve with model scale and instruction prompt tuning.” For example, every version of the AI missed important information and included incorrect or inappropriate content in their answers at a higher rate compared to humans. 

Language models are getting better at parsing information with more complexity and volume. And they seem to do okay with tasks that require scientific knowledge and reasoning. Several small models, including SciBERT and PubMedBERT, have pushed the boundaries of language models to understand texts loaded with jargon and specialty terms.  

But in the biomedical and scientific fields, there are complicated factors at play and many unknowns. And if the AI is wrong, then who takes responsibility for malpractice? Can an error be traced back to its source when much of the algorithm works like a black box? Additionally, these algorithms (mathematical instructions given to the computer by programmers) are imperfect and need complete and correct training data, which is not always available for various conditions across different demographics. Plus, buying and organizing health data can be expensive.

Answering questions correctly on a multiple-choice standardized test does not convey intelligence. And the computer’s analytical ability might fall short if it were presented with a real-life clinical case. So while these tests look impressive on paper, most of these AIs are not ready for deployment. Consider IBM’s Watson AI health project. Even with millions of dollars in investment, it still had numerous problems and was not practical or flexible enough at scale (it ultimately imploded and was sold for parts). 

Google and DeepMind do recognize the limitations of this technology. They wrote in their paper that there are still several areas that need to be developed and improved for this model to be actually useful, such as the grounding of the responses in authoritative, up-to-date medical sources and the ability to detect and communicate uncertainty effectively to the human clinician or patient. 

The post This AI is no doctor, but its medical diagnoses are pretty spot on appeared first on Popular Science.

Only 4 people have been able to solve this 1934 mystery puzzle. Can AI do better? https://www.popsci.com/technology/cains-jawbone-ai/ Wed, 04 Jan 2023 20:00:00 +0000 https://www.popsci.com/?p=503094
murder mystery illustration
"Cain's Jawbone" has recently been popularized thanks to TikTok. DEPOSIT PHOTOS

'Cain’s Jawbone' is a scrambled whodunnit that claims to be decipherable through 'logic and intelligent reading.'


In the 1930s, British crossword writer Edward Powys Mathers created a “fiendishly difficult literary puzzle” in the form of a novel called “Cain’s Jawbone.” The trick to unraveling the whodunnit involves piecing the 100 pages of the book together in the correct order to reveal the six murders and how they happened. 

According to The Guardian, only four (known) people have been able to solve this since the book was first published. But the age-old mystery saw a resurgence of interest after it was popularized through TikTok by user Sarah Scannel, prompting a 70,000-copy reprint by Unbound. The Washington Post reported last year that this novel has quickly gained a cult following of sorts, with the new wave of curious sleuths openly discussing their progress in online communities across social media. On sites like Reddit, the subreddit r/CainsJawbone has more than 7,600 members.

So can machine learning help crack the code? A small group of people are trying it out. Last month, publisher Unbound partnered with the AI platform Zindi to challenge readers to sort the pages using natural language processing algorithms. TikTok user blissfullybreaking explained in a video that one of the advantages of using AI is that it can pick up on 1930s pop culture references that we might otherwise miss, and cross-reference them against relevant literature from that period.
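For a sense of what a naive, automated attempt at the problem might look like, here is a minimal sketch that scores how well one page follows another and chains pages greedily. It is not the method any competing team used, and the three “pages” are invented placeholders.

```python
# Illustrative page-ordering baseline: compare the end of page i with the
# start of page j, then chain pages greedily. The pages are placeholders.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

pages = [
    "…and so I resolved to visit the chemist the very next morning.",
    "The chemist's shop smelled of camphor. He eyed me over his spectacles…",
    "It began, as these things do, with a letter I ought never to have opened…",
]

tails = [p[-200:] for p in pages]            # closing words of each page
heads = [p[:200] for p in pages]             # opening words of each page
vec = TfidfVectorizer().fit(tails + heads)
follow_score = cosine_similarity(vec.transform(tails), vec.transform(heads))

order = [2]                                   # assume page 2 is the opener for this toy
remaining = {0, 1}
while remaining:                              # greedily pick the best successor
    nxt = max(remaining, key=lambda j: follow_score[order[-1]][j])
    order.append(nxt)
    remaining.remove(nxt)
print(order)  # with these toy pages the scores are weak; real attempts need richer models
```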

[Related: Meta wants to improve its AI by studying human brains]

And it’s a promising approach. Already, natural language processing models have been able to successfully parse reading comprehension tests, pass college entrance exams, simplify scientific articles (with varying accuracy), draft up legal briefings, brainstorm story ideas, and play a chat-based strategic board game. AI can also be a fairly competent rookie sleuth, that is, if you give it enough CSI to binge.

Zindi required the solution to be open-source and publicly available, and teams could only use the datasets they provided for this competition. Additionally, the submitted code that yielded their result must be reproducible, with full documentation of what data was used, what features were implemented, and the environment in which the code was run. 

One member of the leading team, user “skaak,” explained how he tackled this challenge in a discussion post on Zindi’s website. He noted that after experimenting with numerous tweaks to his team’s model, his conclusion was that there is still a “human calibration” needed to guide the model through certain references and cultural knowledge.

The competition closed on New Year’s Eve with 222 enrolled participants. Scoring will be finalized later in January, so stay tuned for tallies and takeaways later this month.

The post Only 4 people have been able to solve this 1934 mystery puzzle. Can AI do better? appeared first on Popular Science.

Monitoring volcanoes that could explode? A drone is on it. https://www.popsci.com/technology/drone-volcano-eruption/ Fri, 23 Dec 2022 00:00:00 +0000 https://www.popsci.com/?p=501461
volcano erupting
Volcano eruptions can be scary if you don't know they're coming. Izabela Kraus / Unsplash

By keeping track of the ratio of certain gasses, it can predict when a volcano is likely to erupt.


Volcano eruptions are dramatic, messy events. And worse, they’re often unpredictable. Despite humanity’s best efforts to understand them, volcanoes continue to be a big threat—one that most people are not adequately prepared for. Historic ice cores tell us that the biggest explosions yet are still to come. Over the years, scientists have devised software, computer simulations, and even special instruments to monitor and predict when these sleeping beasts may wake. Researchers from Johannes Gutenberg University Mainz have come up with another technique: drones. 

In a study published in late October in Scientific Reports, a team of scientists showed that small drones can be used to characterize the chemistry of volcanic plumes. Properties like the ratio of carbon dioxide to sulfur dioxide can give clues on what reactions are happening under the surface, and whether an eruption is coming soon. This ratio sometimes changes quickly before a volcano blows. 

Research drone flying in tests on Vulcano, Italy. Hoffmann group

Big, heavy-duty drones can often be a hassle to transport in and around the terrain surrounding volcanoes. And having humans trek out in special gear and suits is not an easy or quick journey either. Using a drone that could fit into a backpack could simplify the process. The drone used in the experiment was a 2-pound commercial model called DJI Mavic 3. 

Of course, the flying gizmo had to undergo a few modifications before it was volcano-ready. It’s decked out with a sensor system coordinated by a 4 MB microcontroller that bridges the communications between an electrochemical sulfur dioxide sensor, a light-based carbon dioxide sensor, other instruments for measuring temperature, humidity, and pressure, and a GPS module.

The drone boasts a relatively high-frequency sampling rate of 0.5 Hz, and its battery allows it to run for 1.5 hours. The team tested the system on the island of Vulcano, Italy, in April 2022 and flew it into a fumarole field, where volcanic gasses and vapors are emitted from openings in the ground. During its test flight, the drone was able to quickly and accurately measure the field’s gaseous emissions in order to monitor volcanic activity.
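The kind of calculation those measurements enable is simple to sketch: track the carbon dioxide-to-sulfur dioxide ratio across successive samples and flag a rapid rise. The readings and the alert threshold below are invented placeholders, not values from the study.

```python
# Illustrative only: placeholder sensor readings and threshold, not study data.
samples = [
    {"co2_ppm": 450.0, "so2_ppm": 9.0},
    {"co2_ppm": 470.0, "so2_ppm": 8.5},
    {"co2_ppm": 520.0, "so2_ppm": 6.0},   # SO2 dropping, ratio climbing
]

ratios = [s["co2_ppm"] / s["so2_ppm"] for s in samples]
for prev, curr in zip(ratios, ratios[1:]):
    change = (curr - prev) / prev
    if change > 0.25:                     # hypothetical "rapid change" threshold
        print(f"CO2/SO2 ratio jumped {change:.0%} between samples -- flag for review")
```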

[Related: Whale-monitoring robots are oceanic eavesdroppers with a mission]

Drones are being used more and more as the eyes in the skies above hazardous locations; they have proved to be a practical solution for monitoring the real-time developments of phenomena and disasters like fog, wildfires, and hurricanes. They’ve also been used to monitor day-to-day happenings in the natural world like shark activity and changes to water sources

Off-the-shelf drones have become an extremely valuable tool for scientists hoping to collect data in hard-to-access places like the polar regions and remote locations where wildlife congregate. But the FAA’s certification process for drone use is sometimes a hurdle to these tools becoming more commonplace among researchers.

The group from Johannes Gutenberg University Mainz isn’t the first to use drones to study volcanoes. For example, an international team of researchers used the flying tools to study the structural integrity of volcanic domes, and NASA has used small airplane-like drones to capture visible-light and thermal images of volcanic areas from above. 

Hopefully, if the science done via drone becomes refined and robust enough, it could help researchers actually predict eruptions well before they happen.

The post Monitoring volcanoes that could explode? A drone is on it. appeared first on Popular Science.

Why bills to totally ban TikTok in the US might do more harm than good https://www.popsci.com/technology/bills-tiktok-ban/ Fri, 16 Dec 2022 00:00:00 +0000 https://www.popsci.com/?p=499233
tiktok app screen on smartphone
Lawmakers are concerned over security and privacy issues with the popular app TikTok. Solen Feyissa / Unsplash

Recently proposed policies might not solve bigger security concerns.


Talks around banning TikTok have been going on since the Trump administration. Over the past five years, the federal government has taken a series of actions to alleviate concerns over spying, including a still-in-progress deal to transfer US users’ data on the social video-sharing app to an American company, and a recent Senate hearing with the company’s chief operating officer. 

However, not everyone was satisfied with the requirements in the potential data-transfer agreement, and skeptics aren’t convinced by TikTok’s process for handling users’ personal information. Senator Marco Rubio (R-FL), the Intelligence Committee’s top Republican, stated to The New York Times earlier this year that unless the tie between TikTok and ByteDance (the Chinese company that currently owns the app) is completely severed, “significant national security issues regarding operations, data, and algorithms [will still be] unresolved.”

This week, Rubio pushed even further by introducing a bill that proposes a nationwide ban on TikTok and any other apps or platforms owned by ByteDance. (The name is a mouthful: Averting the National Threat of Internet Surveillance, Oppressive Censorship and Influence, and Algorithmic Learning by the Chinese Communist Party Act.) If it passes, it would block and prohibit “all transactions from any social media company in, or under the influence of, China, Russia, and several other foreign countries of concern.” Representative Mike Gallagher (R-WI) and Representative Raja Krishnamoorthi (D-IL) introduced companion legislation in the House.

TikTok, the envy of older, beleaguered apps like Facebook and YouTube, has become a de facto search engine and source of news for younger users. But it has drawn both positive and negative attention since it first launched stateside in 2016. 

Critics of the recent bill note that this kind of broad ban would mostly impact the app’s more than one billion everyday users, especially those in the younger generations who have been using TikTok as a stage for political activism, social commentary, and other forms of constitutionally protected expression. Moreover, Techdirt notes that hyper-focusing on just one app ignores the bigger problem that pervades many modern technology companies (including American ones) that sell and broker data.

[Related: How data brokers threaten your privacy]

“TikTok’s security, privacy, and its relationship with the Chinese government is indeed concerning, but a total ban is not the answer,” Electronic Frontier Foundation Deputy Executive Director and General Counsel Kurt Opsahl tells PopSci. “A total ban is not narrowly tailored to the least restrictive means to address the security and privacy concerns, and instead lays a censorial blow against the speech of millions of ordinary Americans.” He declined to comment on whether the bill could actually pass.

Hilary McQuaide, a spokesperson for TikTok, told multiple outlets that she felt this bill was rash considering that there is an ongoing national security review by the Biden administration. ​​“We will continue to brief members of Congress on the plans that have been developed under the oversight of our country’s top national security agencies—plans that we are well underway in implementing—to further secure our platform in the US,” McQuaide said to CNN.

Rubio is not the only congressperson making efforts to corral the influence of TikTok. In an attempt to deal with the larger national security concerns brought up during the Intelligence Committee hearing with TikTok, the Senate on Wednesday unanimously passed a bill introduced by Josh Hawley (R-MO) that would ban the download and use of the app on government-issued devices. The legislation still has to pass the House and earn President Joe Biden’s approval before it can become law.

The post Why bills to totally ban TikTok in the US might do more harm than good appeared first on Popular Science.

This fossil-sorting robot can identify millions-year-old critters for climate researchers https://www.popsci.com/technology/forabot-sort-foram-fossil/ Tue, 13 Dec 2022 20:00:00 +0000 https://www.popsci.com/?p=498405
Foraminiferas are tiny marine organisms with intricate shells.
Foraminiferas are tiny marine organisms with intricate shells. Josef Reischig / Wikimedia Czech Republic

Forabot’s job is to image, ID, and categorize the tiny shells left behind by marine organisms called foraminiferas.


Tiny marine fossils called foraminifera, or forams, have been instrumental in guiding scientists studying global climate through the ages. The oldest record of their existence, evident through the millimeter-wide shells they leave behind when they die, dates back more than 500 million years. In their heyday, these single-celled protists thrived across many marine environments—so much so that a lot of seafloor sediments are composed of their remains.

The shells, which are varied and intricate, can provide valuable insights into the state of the ocean, along with its chemistry and temperature, during the time that the forams were alive. But so far, the process of identifying, cataloging, and sorting through these microscopic organisms has been a tedious chore for research labs around the world. 

Now, there is hope that the menial job may get outsourced to a more mechanical workforce in the future. A team of engineers from North Carolina State University and University of Colorado Boulder has built a robot specifically designed to isolate, image, and classify individual forams by species. It’s called the Forabot, and it is constructed from off-the-shelf robotics components and custom artificial intelligence software (now open source). In a small proof-of-concept study published this week in the journal Geochemistry, Geophysics, Geosystems, the technology had an ID accuracy of 79 percent.

“Due to the small size and great abundance of planktic foraminifera, hundreds or possibly thousands can often be picked from a single cubic centimeter of ocean floor mud,” the authors wrote in their paper. “Researchers utilize relative abundances of foram species in a sample, as well as determine the stable isotope and trace element compositions of their fossilized remains to learn about their paleoenvironment.”

[Related: Your gaming skills could help teach an AI to identify jellyfish and whales]

Before any formal analyses can happen, however, the foraminifera have to be sorted. That’s where Forabot could come in. After scientists wash and sieve samples filled with sand-like shells, they place the materials into a container called the isolation tower. From there, single forams are transferred to another container called the imaging tower, where an automated camera captures a series of shots of the specimen that are then fed to the AI software for identification. Once the specimen gets classified by the computer, it is then shuttled to a sorting station, where it is dispensed into a corresponding well based on species. In its current form, Forabot can distinguish six different species of foram, and can process 27 forams per hour (quick math by the researchers indicates that it can go through around 600 fossils a day).

For the classification software, the team modified a neural network called VGG-16 that had been pretrained on more than 34,000 planktonic foram images that were collected worldwide as part of the Endless Forams project. “This is a proof-of-concept prototype, so we’ll be expanding the number of foram species it is able to identify,” Edgar Lobaton, an associate professor at NC State University and an author on the paper, said in a press release. “And we’re optimistic we’ll also be able to improve the number of forams it can process per hour.”
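As a rough illustration of that setup, and not Forabot’s actual code, here is how a VGG-16 backbone can be given a six-way output head so it predicts one of six foram species. The input size and the untrained weights are illustrative assumptions.

```python
# Illustrative only: swap VGG-16's final layer for a 6-way species head
# (torchvision 0.13+ API). Weights here are untrained placeholders.
import torch
import torch.nn as nn
from torchvision import models

NUM_SPECIES = 6
model = models.vgg16(weights=None)                  # backbone architecture only
model.classifier[6] = nn.Linear(4096, NUM_SPECIES)  # six-species output head
model.eval()

fake_foram_image = torch.rand(1, 3, 224, 224)       # stand-in for one camera shot
with torch.no_grad():
    logits = model(fake_foram_image)
predicted_species = int(logits.argmax(dim=1))
print(predicted_species)                            # index into a species list
```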

Watch Forabot at work below:

The post This fossil-sorting robot can identify millions-year-old critters for climate researchers appeared first on Popular Science.

Watch this little drummer bot stay on beat https://www.popsci.com/technology/xiaomi-humanoid-robot-drum/ Sat, 10 Dec 2022 12:00:00 +0000 https://www.popsci.com/?p=496976
xiaomi's cyberone robot drumming
IEEE Spectrum / YouTube

Humanoid robots can be hard to train. Here's how Xiaomi taught CyberOne to play a drum set.


Humanoid robots have long been a passion project for tech giants and experimental engineers. And these days, it seems like everyone wants one. But for machines, social skills are hard to learn, and more often than not, these robots have a hard time fitting in with the human world. 

Xiaomi, a consumer electronics company based in China, teased in August that it was making such a machine. According to the company’s press release, the 5-foot, 8-inch bot, called CyberOne, is probably not intended to be all that useful, IEEE Spectrum reported, but rather it’s “a way of exploring possibilities with technology that may have useful applications elsewhere.”

As for the robot’s specs, the company said that CyberOne comes with a “depth vision module” as well as an AI interaction algorithm. It can additionally support up to 21 degrees of freedom in motion and has a real-time response speed that “allows it to fully simulate human movements.” 

Xiaomi has just unveiled a new clip of CyberOne, in which the robot slowly but aptly plays a multi-instrument drum set. It’s able to precisely coordinate a series of complex movements, including hitting the drumsticks together and tapping the cymbals, the foot pedal, and a set of four drums to make a range of sounds. And it’s certainly more elegant and evolved than other, scrappier (and sometimes disembodied) robot bands and orchestras of the past.

[Related: How Spotify trained an AI to transcribe music]

So how does CyberOne know what to do? Xiaomi made a diagram showing how sound files become movements for CyberOne. First, drum position and strike speed commands are fine-tuned online. Then, these beats are fed to CyberOne via a MIDI file, which tells the computer what instrument was played, what notes were played on the instrument, how loud and how long they were played for, and with which effects, if any. The robot then uses an offline motion library to generate the moves for its performance, being careful to hit the correct instrument and in time. 
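Here is a minimal sketch of that pipeline in code: read note events from a MIDI file with the mido library and map each percussion note to a named strike that a motion library would turn into joint trajectories. The note-to-limb table and the tiny in-memory demo track are assumptions for illustration; only the General MIDI percussion numbers are standard.

```python
# Illustrative only: the note-to-limb mapping and the demo track are made up.
import mido

DRUM_MAP = {36: "right_foot_kick", 38: "left_hand_snare", 42: "right_hand_hihat"}

mid = mido.MidiFile()
track = mido.MidiTrack()
mid.tracks.append(track)
track.append(mido.Message("note_on", note=36, velocity=100, time=0))
track.append(mido.Message("note_on", note=42, velocity=80, time=240))
track.append(mido.Message("note_on", note=38, velocity=110, time=240))

for msg in mid:  # iterating a MidiFile gives time in seconds since the previous event
    if msg.type == "note_on" and msg.velocity > 0:
        strike = DRUM_MAP.get(msg.note, "ignore")
        print(f"wait {msg.time:.3f}s, then note {msg.note:3d} -> {strike}")
```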

Executing instructions precisely and in a controlled, coordinated manner is a difficult exercise even for humans. Humanoid robots are different from regular bots because they’re meant to emulate natural movements, but they can often be impractical in a real-world setting. They need specialized training to do the simplest functions (like not falling over). A humanoid robot capable of homing in on a skill like playing drums could be useful for a variety of complex tasks that might involve manipulating or interacting with objects in its environment.

“We are working on the second generation of CyberOne, and hope to further improve its locomotion and manipulation ability,” Zeyu Ren, a senior hardware engineer at the Xiaomi Robotics Lab, told IEEE Spectrum. “On the hardware level, we plan to add more degrees of freedom, integrate self-developed dexterous hands, and add more sensors. On the software level, more robust control algorithms for locomotion and vision will be developed.”

Watch CyberOne groove below:

The post Watch this little drummer bot stay on beat appeared first on Popular Science.

Magnetic microrobots could zap the bacteria out of your cold glass of milk https://www.popsci.com/technology/magnetic-microrobots-dairy/ Thu, 08 Dec 2022 00:00:00 +0000 https://www.popsci.com/?p=496186
milk products
Aleksey Melkomukov / Unsplash

These “MagRobots” can specifically target toxins in dairy that survive pasteurization.


A perfect mix of chemistry and engineering has produced microscopic robots that function like specialized immune cells—capable of pursuing pathogenic culprits with a specific mugshot. 

The pathogen in question is Staphylococcus aureus (S. aureus), which can impact dairy cows’ milk production. These bacteria also make toxins that cause food poisoning and gastrointestinal illnesses in humans (that includes the usual trifecta of diarrhea, abdominal cramps, and nausea). 

Removing the toxins from dairy products is not easy to do. The toxins tend to be stable and can’t be eradicated by common hygienic practices in food production, like pasteurization and heat sterilization. However, an international group of researchers led by a team from the University of Chemistry and Technology Prague may have come up with another way to get rid of these pesky pathogens: with a tiny army of magnetic microrobots. Plus, each “MagRobot” is equipped with an antibody that specifically targets a protein on the S. aureus bacteria, like a lock-and-key mechanism. 

In a small proof-of-concept study published in the journal Small, the team detailed how these MagRobots could bind and isolate S. aureus from milk without affecting other microbes that may naturally occur.

Bacteria-chasing nanobots have been making waves lately in medicine, clearing wounds and dental plaque. And if these tiny devices can work in real, scaled up trials, as opposed to just in the lab, they promise to cut down on the use of antibiotics. 

In the past, microscopic robots have been propelled by light, chemicals, and even ultrasound. But these MagRobots are driven by a special magnetic field. The team thought this form of control was the best option since the robots wouldn’t produce any toxic byproducts and can be remotely accessed for reconfiguring and reprogramming.

To make the MagRobots, paramagnetic microparticles are covered with a chemical compound that allows them to be coated with antibodies that match proteins on the cell wall of S. aureus. This allows the MagRobot to find, bind, and retrieve the bacteria. A transversal rotating magnetic field with different frequencies is used to coordinate the bots; at higher frequencies, the MagRobots moved faster. Researchers preset the trajectory of the microrobots so that they would “walk” back and forth, in three rows and two columns, through a control solution and a container of milk. They are retrieved using a permanent magnet.

During the experiment, MagRobots that measured 2.8 micrometers across were able to remove around 60 percent of the S. aureus cells in one hour. When the MagRobots were placed in milk containing both S. aureus and another bacterium, E. coli, they were able to avoid the E. coli and go solely after the S. aureus.

“These results indicate that our system can successfully remove S. aureus remaining after the milk has been pasteurized,” the researchers wrote. “Moreover, this fuel-free removal system based on magnetic robots is specific to S. aureus bacteria and does not affect the natural milk microbiota or add toxic compounds resulting from fuel catalysis.”

Additionally, they propose that this method can be applied to a variety of other pathogens simply by modifying the surfaces of these microrobots. 

The post Magnetic microrobots could zap the bacteria out of your cold glass of milk appeared first on Popular Science.

Scientists modeled a tiny wormhole on a quantum computer https://www.popsci.com/technology/wormhole-quantum-computer/ Thu, 01 Dec 2022 22:30:00 +0000 https://www.popsci.com/?p=493923
traversable wormhole illustration
inqnet/A. Mueller (Caltech)

It’s not a real rip in spacetime, but it’s still cool.


Physicists, mathematicians, astronomers, and even filmmakers have long been fascinated by the concept of a wormhole: a hypothetical, unpredictable, and oftentimes volatile phenomenon that is believed to create tunnels (and shortcuts between two distant locations) across spacetime. One theory holds that if you link up two black holes in the right way, you can create a wormhole.

Studying wormholes is like piecing together an incomplete puzzle without knowing what the final picture is supposed to look like. You can roughly deduce what’s supposed to go in the gaps based on the completed images around it, but you can’t know for sure. That’s because there has not yet been definitive proof that wormholes are in fact out there. However, some of the solutions to fundamental equations and theories in physics suggest that such an entity exists. 

In order to understand the properties of this cosmic phantom based on what has been deduced so far, researchers from Caltech, Harvard, MIT, Fermilab, and Google created a small “wormhole” effect between two quantum systems sitting on the same processor. What’s more, the team was able to send a signal through it. 

According to Quanta, this edges the Caltech-Google team ahead of an IBM-Quantinuum team that also sought to establish wormhole teleportation. 

While what they created is unfortunately not a real crack through the fabric of spacetime, the system does mimic the known dynamics of wormholes. In terms of the properties that physicists usually consider, like positive or negative energy, gravity, and particle behavior, the computer simulation effectively looks and works like a tiny wormhole. This model, the team said in a press conference, is a way to study the fundamental problems of the universe in a laboratory setting. A paper describing this system was published this week in the journal Nature.

“We found a quantum system that exhibits key properties of a gravitational wormhole yet is sufficiently small to implement on today’s quantum hardware,” Maria Spiropulu, a professor of physics at Caltech, said in a press release. “This work constitutes a step toward a larger program of testing quantum gravity physics using a quantum computer.” 

[Related: Chicago now has a 124-mile quantum network. This is what it’s for.]

Quantum gravity is a set of theories that posits how the rules governing gravity (which describes how matter and energy behave) and quantum mechanics (which describes how atoms and particles behave) fit together. Researchers don’t have the exact equation to describe quantum gravity in our universe yet. 

Although scientists have been mulling over the relationship between gravity and wormholes for around 100 years, it wasn’t until 2013 that entanglement (a quantum physics phenomenon) was thought to factor into the link. And in 2017, another group of scientists suggested that traversable wormholes worked kind of like quantum teleportation (in which information is transported across space using principles of entanglement).
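For readers who have never seen quantum teleportation written down, here is the textbook circuit, expressed with Google’s open-source Cirq library. It is emphatically not the wormhole protocol from the new paper; it only shows the basic trick the 2017 idea builds on, with corrections applied via the deferred-measurement shortcut so it runs on a plain simulator.

```python
# Textbook quantum teleportation, not the wormhole experiment's protocol.
import cirq

msg, alice, bob = cirq.LineQubit.range(3)
circuit = cirq.Circuit(
    cirq.X(msg) ** 0.25,                        # put an arbitrary state on msg
    cirq.H(alice), cirq.CNOT(alice, bob),       # entangle alice and bob
    cirq.CNOT(msg, alice), cirq.H(msg),         # Bell-basis interaction
    cirq.CNOT(alice, bob), cirq.CZ(msg, bob),   # deferred corrections on bob
)

state = cirq.Simulator().simulate(circuit).final_state_vector
# Bob's qubit (index 2) should now carry the state originally put on msg.
print(cirq.bloch_vector_from_state_vector(state, index=2))
```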

In the latest experiment, run on just 9 qubits (the quantum equivalent of binary bits in classical computing) in Google’s Sycamore quantum processor, the team used machine learning to set up a simplified version of the wormhole system “that could be encoded in the current quantum architectures and that would preserve the gravitational properties,” Spiropulu explained. During the experiment, they showed that information (in the form of qubits), could be sent through one system and reappear on the other system in the right order—a behavior that is wormhole-like. 

[Related: In photos: Journey to the center of a quantum computer]

So how do researchers go about setting up a little universe in a box with its own special rules and geometry in place? According to Google, a special type of correspondence (technically known as AdS/CFT) between different physical theories allowed the scientists to construct a hologram-like universe where they can “connect objects in space with specific ensembles of interacting qubits on the surface,” researchers wrote in a blog post. “This allows quantum processors to work directly with qubits while providing insights into spacetime physics. By carefully defining the parameters of the quantum computer to emulate a given model, we can look at black holes, or even go further and look at two black holes connected to each other — a configuration known as a wormhole.”

The researchers used machine learning to find the perfect quantum system that would preserve some key gravitational properties and maintain the energy dynamics that they wanted the model to portray. Plus, they had to simulate particles called fermions.

The team noted in the press conference that there is strong evidence that our universe operates by similar rules as the hologram universe observed on the quantum chip. The researchers wrote in the Google blog item: “Gravity is only one example of the unique ability of quantum computers to probe complex physical theories: quantum processors can provide insight into time crystals, quantum chaos, and chemistry.” 

The post Scientists modeled a tiny wormhole on a quantum computer appeared first on Popular Science.

What this jellyfish-like sea creature can teach us about underwater vehicles of the future https://www.popsci.com/technology/marine-animal-siphonophore-design/ Tue, 29 Nov 2022 20:00:00 +0000 https://www.popsci.com/?p=492881
A jellyfish-like sea creature that's classified as a siphonophore
NOAA Photo Library / Flickr

Nanomia bijuga is built like bubble wrap, and it's a master of multi-jet propulsion.


Sea creatures have developed many creative ways of getting around their watery worlds. Some have tails for swimming, some have flippers for gliding, and others propel themselves using jets. That last transportation mode is commonly associated with squids, octopuses, and jellyfish. For years, researchers have been interested in trying to transfer this type of movement to soft robots, although it’s been challenging. (Here are a few more examples.) 

A team led by researchers from the University of Oregon has sought to get a closer understanding of how these gelatinous organisms steer themselves about their underwater domains, in order to brainstorm better ways of designing underwater vehicles of the future. Their findings were published this week in the journal PNAS. The creature they focused on was Nanomia bijuga, a close relative of jellyfish, which looks largely like two rows of bubble wrap with some ribbons attached at one end.

This bubble wrap body is known as the nectosome, and each individual bubble is called a nectophore. All of the nectophores can produce jets of water independently by expanding and contracting to direct currents of seawater through a flexible opening. Technically speaking, each nectophore is an organism in and of itself, and they’re bundled together into a colony. The Monterey Bay Aquarium Research Institute describes these animals as “living commuter trains.” 

The bubble units can coordinate to swim together as one, produce jets in sequence, or do their own thing if they want. Importantly, a few patterns of firing the jets produce the major movements. Firing pairs of nectophores in sequence from the tip to the ribbon tail enables Nanomia to swim forward or in reverse. Firing all the nectophores on one side, or firing some individual nectophores, turns and rotates its body. Using these commands for the multiple jets, Nanomia can migrate hundreds of yards twice a day down to depths of 2,300 feet (which includes the twilight zone).

For Nanomia, the number of nectophores can vary from animal to animal. So, to take this examination further, the team wanted to see whether this variation impacted swimming speed or efficiency. Both efficiency and speed appear to increase with more nectophores, but seem to hit a plateau at around 12.

This system of propulsion lets Nanomia move about the ocean at rates similar to many fish (judged by speed in the context of body length), but without the high metabolic cost of operating a neuromuscular system.

[Related: This tiny AI-powered robot is learning to explore the ocean on its own]

So, how could this sea creature help inform the design of vehicles that travel beneath the waves? Caltech’s John Dabiri, one of the authors on the paper, has long been a proponent of taking inspiration from the fluid dynamics of critters like jellyfish to fashion aquatic vessels. And while the researchers in this paper do not suggest a specific design for a propulsion system for underwater vehicles, they do note that the behavior of these animals may offer helpful guidelines for engines that operate through multiple jets. “Analogously to [Nanomia] bijuga, a single underwater vehicle with multiple propulsors could use different modes to adapt to context,” the researchers wrote in the paper.

Simple changes in the timing of how the jets fire, or which jets fire together, can have a big impact on the energy efficiency and speed of a vehicle. For example, if engineers wanted to make a system that doesn’t need a lot of power, then it might be helpful to have jets that could be controlled independently. If the vehicle needs to be fast, then there needs to be a function that can operate all engines from one side at the same time.
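A purely illustrative sketch, not anything from the PNAS paper, of how those guidelines might translate into software: the same set of propulsors can be fired in sequence for efficient cruising or all together on one side for a fast turn. The propulsor counts and groupings are invented placeholders.

```python
# Illustrative only: a hypothetical multi-jet firing scheduler.
def firing_schedule(num_jets: int, mode: str) -> list[list[int]]:
    left = list(range(0, num_jets, 2))       # even indices: left-side jets
    right = list(range(1, num_jets, 2))      # odd indices: right-side jets
    if mode == "cruise":                     # fire pairs front-to-back, one pair at a time
        return [[l, r] for l, r in zip(left, right)]
    if mode == "turn_left":                  # fire every right-side jet together
        return [right]
    if mode == "turn_right":
        return [left]
    raise ValueError(f"unknown mode: {mode}")

print(firing_schedule(8, "cruise"))      # [[0, 1], [2, 3], [4, 5], [6, 7]]
print(firing_schedule(8, "turn_left"))   # [[1, 3, 5, 7]]
```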

“For underwater vehicles with few propulsors, adding propulsors may provide large performance benefits,” the researchers noted, “but when the number of propulsors is high, the increase in complexity from adding propulsors may outweigh the incremental performance gains.”

Learn more about Nanomia and watch it freestyle below: 

The post What this jellyfish-like sea creature can teach us about underwater vehicles of the future appeared first on Popular Science.

Meta’s new AI can use deceit to conquer a board game world https://www.popsci.com/technology/meta-ai-bot-diplomacy/ Tue, 22 Nov 2022 20:00:00 +0000 https://www.popsci.com/?p=490109
map of europe and a globe
Aslı Yılmaz / Unsplash

It can play Diplomacy better than many humans. Here's how it works.


Computers are getting pretty good at a growing roster of arcade and board games, including chess, Go, Pong, and Pac-Man. Machines might even change how video games get developed in the not-so-distant future. Now, after building an AI bot that outbluffs humans at poker, scientists at Meta AI have created a program capable of even more complex gameplay: one that can strategize, understand other players’ intentions, and communicate or negotiate plans with them through chat messages.  

This bot is named ​​CICERO, and it can play the game Diplomacy better than many human players. CICERO more than doubled the average score of its human opponents and placed in the top 10 percent of players across 40 games in an online league.

The program has been a work in progress for the past three years between engineers at Meta and researchers from Columbia, MIT, Stanford, Carnegie Mellon University, UC Berkeley, and Harvard. A description of how CICERO came together was published in a paper today in Science. The team is open-sourcing the code and the model, and they will be making the data used in the project accessible to other researchers.

Diplomacy started out as a board game set in a stylized version of Europe. Players assume the roles of different countries, and their objective is to gain control of territories by making strategic agreements and plans of action.

“What sets Diplomacy apart is that it involves cooperation, it involves trust, and most importantly, it involves natural language communication and negotiation with other players,” says Noam Brown, a research scientist at Meta AI and an author on the paper. 

Although a special version of the game without the chat function has been used to test AI over the years, the progress with language models from 2019 onwards made the team realize that it might be possible to teach an AI how to play Diplomacy in full. 

But because Diplomacy had this unique requirement for collaboration, “a lot of the techniques that have been used for prior games just don’t apply anymore,” Brown explains. 

Previously, the team had run an experiment with the non-language version of the game, where players were specifically informed that in each game there would be one bot and six humans. “What we found is that the players would actively try to figure out who the bot was, and then eliminate that player,” says Brown. “Fortunately, our bot was able to pass as a human in that setting; they actually had a lot of trouble figuring out who the bot was, so the bot actually got first place in the league.” 

But with the full game of Diplomacy, the team knew that the bot wasn’t ready to pass the Turing test if natural language interrogations were involved. So during the experiment, players were not told that they were playing with a bot—a detail that was only revealed after the game ended. 

Making CICERO

To construct the Diplomacy-playing AI, the team built two separate data processing engines that fed into one another: one engine for dialogue (inspired by models like GPT-3, BlenderBot 3, LaMDA, and OPT-175B), and another for strategic reasoning (inspired by previous work like AlphaGo and Pluribus). Working in combination with the strategy engine, the dialogue model, which was trained on a large corpus of text data from the internet and 50,000 human games from webDiplomacy.net, can communicate and convey intents that are in line with its planned course of action.


This works in the reverse direction as well. When other players communicate to the bot, the dialogue engine can translate that into plans and actions in the game, and use that to inform the strategy engine about next steps. CICERO’s grand plans are formulated by a strategic reasoning engine that estimates the best next move based on the state of the board, the content of the most recent conversations, moves that were made historically by players in a similar situation, and the bot’s goals. 

[Related: MIT scientists taught robots how to sabotage each other]

“Language models are really good these days, but they definitely have their shortcomings. The more strategy that we can offload from the language model, the better we can do,” Brown says. “For that reason, we have this dialogue model that conditions on the plans, but the dialogue model is not responsible for the plans.” So, the part of the program that does the talking is not the same as the part that does the planning.

The planning algorithm the bot uses is called piKL. It will make an initial prediction of what everyone is likely to do and what everyone thinks the bot will do, and refine this prediction by weighing the values of different moves. “When doing this iterative process, it’s trying to weigh what people have done historically given the dataset that we have,” says Brown. “It’s also trying to balance that with the understanding that players have certain objectives in this game, they’re trying to maximize their score and they’re going to not do very serious mistakes as they would minor mistakes. We’ve actually observed that this models humans much better than just doing the initial prediction based on human data.”
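A heavily simplified, single-step sketch of that idea: start from a human-like “anchor” policy learned from game data, then shift probability toward higher-value moves without straying too far from what a person would plausibly do. The action values, anchor probabilities, and the lambda setting are invented placeholders, not numbers from CICERO.

```python
# Illustrative only: a one-step, KL-regularized reweighting in the spirit of
# the description above. All numbers are placeholders.
import math

anchor = {"hold": 0.5, "support_ally": 0.3, "attack_north": 0.2}  # human-like prior
value = {"hold": 0.1, "support_ally": 0.6, "attack_north": 0.4}   # estimated payoff
lam = 0.5   # larger lambda hugs human behavior; smaller chases raw value

weights = {a: anchor[a] * math.exp(value[a] / lam) for a in anchor}
total = sum(weights.values())
policy = {a: w / total for a, w in weights.items()}
print(policy)   # probability mass shifts toward "support_ally" but stays human-plausible
```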


“Deception exists on a spectrum” 

Consider the concept of deception, which is an interesting aspect of Diplomacy. In the game, before each turn, players will spend 5 to 15 minutes talking to each other and negotiating plans. But since this is all happening in private, people can double deal. They can make promises to one person, and tell another that they’ll do something else.

But just because people can be sneaky doesn’t mean that’s the best way to go about the contest. “A lot of people when they start playing the game of Diplomacy they view it as a game about deception. But actually if you talk to experienced Diplomacy players, they think with a very different approach to the game, and they say it’s a game about trust,” Brown says. “It’s being able to establish trust with other players in an environment that encourages you to not trust anybody. Diplomacy is not a game where you can be successful on your own. You really need to have allies.” 

Early versions of the bot were more outright deceptive, but they actually ended up doing quite poorly. Researchers then went in to add filters to make it lie less, leading to much better performances. But of course, CICERO is not always fully honest about all of its intentions. And importantly, it understands that other players may also be deceptive. “Deception exists on a spectrum, and we’re filtering out the most extreme forms of deception, because that’s not helpful,” Brown says. “But there are situations where the bot will strategically leave out information.”

For example, if it’s planning to attack somebody, it will omit the parts of its attack plan in its communications. If it’s working with an ally, it might only communicate the need-to-know details, because exposing too much of its goals might leave it open to being backstabbed. 

“We’re accounting for the fact that players do not act like machines, they could behave irrationally, they could behave suboptimally. If you want to have AI acting in the real world, that’s necessary to have them understand that humans are going to behave in a human-like way, not in a robot-like way,” Brown says. “Having an agent that is able to see things from other perspectives and understand their point of view is a pretty important skillset going forward in human-AI interactions.” 

Brown notes that the techniques that underlie the bot are “quite general,” and he can imagine other engineers building on this research in a way that leads to more useful personal assistants and chatbots.

The post Meta’s new AI can use deceit to conquer a board game world appeared first on Popular Science.

How Spotify trained an AI to transcribe music https://www.popsci.com/technology/spotify-basic-pitch/ Mon, 21 Nov 2022 23:00:00 +0000 https://www.popsci.com/?p=489793
sheet music on piano
Parsoa Khorsand / Unsplash

Basic Pitch, an open-source tool on the web, can take sound recordings and turn them into a computer-recognizable MIDI score.


Before electronic music became an umbrella category for a distinct genre of modern music, the term referred to a technique for producing music that involved transferring audio made by real-life instruments into waveforms that could be recorded on tape or played through amps and loudspeakers. During the early to mid-1900s, special electronic instruments and music synthesizers—machines hooked up to computers that can electronically generate and modify sounds from a variety of instruments—started becoming popular.

But there was a problem: almost every company used their own computer programming language to control their digital instruments, making it hard for musicians to pull together different instruments made by different manufacturers. So, in 1983, the industry came together and created a communications protocol called musical instrument digital interface, or MIDI, to standardize how external audio sources transmit messages to computers, and vice versa.

MIDI works like a command that tells the computer what instrument was played, what notes were played on the instrument, how loud and how long it was played for, and with which effects, if any. The instructions cover the individual notes of individual instruments, and allow for the sound to be accurately played back. When songs are stored as MIDI files instead of as regular audio files (like MP3s or CDs), musicians can easily edit the tempo, key, and instrumentation of the track. They can also take out individual notes or entire instrument sections, change the instrument type, or duplicate a main vocal track and turn it into a harmony. Because MIDI keeps track of what notes get played at what times by what instruments, it is essentially a digital score, and software like Notation Player can effortlessly transcribe MIDI files into sheet music.

[Related: Interface The Music: An Introduction to Electronic Instrument Control] 

Although MIDI is convenient for a lot of reasons, it usually requires musicians to have some sort of interface, like a MIDI controller keyboard, or knowledge on how to program notes by hand. But a tool made publicly available by engineers from Spotify and Soundtrap this summer, called Basic Pitch, promises to simplify this process, and open up this tool for musicians who lack specialty gear or coding experience. 

“Similar to how you ask your voice assistant to identify the words you’re saying and also make sense of the meaning behind those words, we’re using neural networks to understand and process audio in music and podcasts,” Rachel Bittner, a Spotify scientist who worked on the project, said in a September blog post. “This work combines our ML research and practices with domain knowledge about audio—understanding the fundamentals of how music works, like pitch, tone, tempo, the frequencies of different instruments, and more.”

Bittner envisions that the tool can serve as a “starting point” transcription that artists can make in the moment that saves them the trouble of writing out notes and melodies by hand. 

This open-source tool uses machine learning to convert any audio into MIDI format. See it in action here.
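For the curious, the open-source package can also be driven from Python (pip install basic-pitch). The snippet below mirrors the usage the project documents at the time of writing; the file names are placeholders.

```python
# Placeholder file names; follows basic-pitch's documented Python entry point.
from basic_pitch.inference import predict

model_output, midi_data, note_events = predict("my_guitar_riff.wav")

# midi_data is a PrettyMIDI object, so it can be written straight to disk.
midi_data.write("my_guitar_riff.mid")
print(f"detected {len(note_events)} note events")
```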

[Related: Why Spotify’s music recommendations always seem so spot on]

Previous research into this space has made the process of building this model easier, to an extent. There are devices called Disklaviers that record real-time piano performances and store it as a MIDI file. And, there are many audio recordings and paired MIDI files that researchers can use to create algorithms. “There are other tools that do many parts of what Basic Pitch does,” Bittner said in the podcast NerdOut@Spotify. “What I think makes Basic Pitch special is that it does a lot of things all in one tool, rather than having to use different tools for different types of audio.” 

Additionally, an advantage it offers over other note-detection systems is that it can track multiple notes from more than one instrument simultaneously. So, it can transcribe voice, guitar, and singing all at once (here’s a paper the team published this year on the tech behind this). Basic Pitch can also support sound effects like vibrato (a wiggle on a note), glissando (sliding between two notes), bends (fluctuations in pitch), as well, thanks to a pitch bending detection mechanism. 

To understand the components in the model, here are some basic things to know about music: Perceived pitch is the fundamental frequency, otherwise known as the lowest frequency of a vibrating object (like a violin string or a vocal cord). Music can be represented as a bunch of sine waves, and each sine wave has its own particular frequency. In physics, most sounds we hear as pitched have other tones harmonically spaced above them. The hard thing that pitch-tracking algorithms have to do is to wrap all the extra pitches down into a main one, Bittner noted. The team used something called a harmonic constant-Q transform to model the structure in pitched sound by harmonic, frequency, and time.
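A minimal sketch of that kind of pitch-aware input: a constant-Q transform, computed here with the librosa library, lays out frequency bins so that harmonics sit at fixed offsets, which is the structure a harmonically stacked CNN can exploit. The bin counts below are illustrative, not Basic Pitch’s actual settings.

```python
# Illustrative constant-Q transform; parameters are not Basic Pitch's settings.
import numpy as np
import librosa

y, sr = librosa.load(librosa.example("trumpet"))            # bundled demo clip
cqt = librosa.cqt(y, sr=sr, n_bins=84, bins_per_octave=12)  # 7 octaves, semitone bins
magnitude_db = librosa.amplitude_to_db(np.abs(cqt), ref=np.max)
print(magnitude_db.shape)                                   # (84 pitch bins, time frames)
```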

The Spotify team wanted to make the model fast and low-energy, so it had to be less computationally expensive and make fewer inputs go further. That means the machine learning model itself had to have simple parameters and few layers. Basic Pitch is based on a convolutional neural network (CNN) that has less than 20 MB of peak memory and fewer than 17,000 parameters. Interestingly, CNNs were one of the first models known to be good at recognizing images. For this product, Spotify trained and tested its CNN on a variety of open datasets for vocals, acoustic guitar, piano, synthesizers, and orchestra, across many music genres. “In order to allow for a small model, Basic Pitch was built with a harmonic stacking layer and three types of outputs: onsets, notes, and pitch bends,” Spotify engineers wrote in a blog post.

[Related: Birders behold: Cornell’s Merlin app is now a one-stop shop for bird identification]

So what is the benefit of using machine learning for a task like this? Bittner explained in the podcast that they could build a simple representation of pitch by using audio clips of one instrument played in one room on one microphone. But machine learning allows them to discern similar underlying patterns even when they have to work with varying instruments, microphones, and rooms. 

Compared to a 2020 multi-instrument automatic music transcription model trained on data from MusicNET, Basic Pitch had a higher accuracy when it came to detecting notes. However, Basic Pitch performed worse compared to models trained to detect notes from specific instruments, like guitar and piano. Spotify engineers acknowledge that the tool is not perfect, and they are eager to hear feedback from the community and see how musicians use it.

Curious to see how it works? Try it out here—you can record sounds directly on the web portal or upload an audio file.

The post How Spotify trained an AI to transcribe music appeared first on Popular Science.

How engineers taught a manta ray-inspired robot the butterfly stroke https://www.popsci.com/technology/butterfly-bot-ncstate/ Fri, 18 Nov 2022 19:00:00 +0000 https://www.popsci.com/?p=488751
manta ray-inspired swimming robot
Yin Lab@NCSU / YouTube

The engineers behind it claim this design allows the robot to be lighter, faster, and more energy efficient.


Making a robot that can swim well can be surprisingly difficult. Part of this is due to the fact that the physics of how organisms move in the water is often complicated, and hard to replicate. But that hasn’t stopped researchers from studying how ocean animals move so they can create better aquatic robots. 

A notable addition to this field comes from engineers at North Carolina State University, who came up with a manta ray-like robot that can do the butterfly stroke. A detailed description of their design is published this week in the journal Science Advances. 

“To date, swimming soft robots have not been able to swim faster than one body length per second, but marine animals—such as manta rays—are able to swim much faster, and much more efficiently,” Jie Yin, an associate professor at NC State University, and an author on the paper, said in a press release. “We wanted to draw on the biomechanics of these animals to see if we could develop faster, more energy-efficient soft robots.” 

[Related: A tuna robot reveals the art of gliding gracefully through water]

As a result, the team put together two versions of a silicone “butterfly bot”: one that can reach average speeds of 3.74 body lengths per second, and another that can turn sharply to the left or right. Both are about the size of a human palm.

Unlike similar biology-inspired robot concepts in the past that use motors to directly operate the wings, the NC State team’s robot flaps with a “bistable” wing that snaps into two distinct positions like a hair clip, or a pop-up jumping toy. To alter the position of the curved, rotating wings, researchers used a tether to pump air into upper and lower chambers of the robot body. When the chambers inflate and deflate, the body bends up and down, making the wings snap back and forth. 

As the robots were tested in the aquarium, researchers saw that inflating the top chamber caused the soft body to bend upward, inducing a downstroke motion. Deflating that and inflating the bottom pneumatic chamber caused the body to bend downward, inducing an upstroke with the wings. During the upstroke-to-downstroke transition, the robot body is pushed deep into the water and then propelled forward. This design allows the soft robot to be lighter and more energy efficient. 
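
A rough control loop captures the rhythm described above. The Valve class and timing values below are hypothetical stand-ins for whatever pump-and-valve hardware the tether connects to; this is a sketch of the alternating inflate/deflate cycle, not the NC State team’s code.

    # Sketch of the alternating inflate/deflate cycle that snaps the bistable wings.
    # The Valve class and timings are hypothetical stand-ins, not the researchers' setup.
    import time

    class Valve:
        def __init__(self, name):
            self.name = name

        def open(self):
            print(f"{self.name}: inflating")

        def close(self):
            print(f"{self.name}: venting")

    top = Valve("top chamber")
    bottom = Valve("bottom chamber")

    def flap(cycles=3, half_period_s=0.25):
        for _ in range(cycles):
            top.open(); bottom.close()    # body bends upward -> wings snap into a downstroke
            time.sleep(half_period_s)
            top.close(); bottom.open()    # body bends downward -> wings snap into an upstroke
            time.sleep(half_period_s)

    flap()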

[Related: This amphibious drone hitchhikes like a suckerfish]

The first version of butterfly bot was built for speed. It holds a single drive unit that controls both of its wings. The second version was built for maneuverability. It has two connected drive units that let each wing be controlled independently. Flapping only one wing allows it to turn more easily. 

“This work is an exciting proof of concept, but it has limitations,” Yin said. “Most obviously, the current prototypes are tethered by slender tubing, which is what we use to pump air into the central bodies. We’re currently working to develop an untethered, autonomous version.”

Watch butterfly bot in action, below:

Self-driving cars are turning into hyperlocal weather stations https://www.popsci.com/technology/waymo-self-driving-car-weather-station/ Tue, 15 Nov 2022 20:00:00 +0000 https://www.popsci.com/?p=487342
waymo self-driving vehicle on the road
Waymo

Waymo wants to have an unobstructed view of the road even in inclement conditions.

Weather phenomena can impact our travels, and self-driving cars are no different from us in that sense. Inclement conditions can create challenges for autonomous vehicles. 

Reflections from wet roads can confuse cameras, and dirt and condensation from fog and mist can disrupt sensors, making it hard for cars to accurately perceive the world around them. Additionally, snow and rain change the friction of the tires and affect how cars drive. 

Waymo, the self-driving car company from Google’s parent Alphabet, has a plan for keeping its path clear. It involves using what it calls the Waymo Driver—the core technologies steering its autonomous vehicles—as mobile weather stations, helping the company understand the detailed conditions in which their vehicles are operating and hopefully make better decisions on the road. 

Weather data is usually gathered through an array of sources like stations on the ground, radars, balloons, satellites, and even robots. Computers then get to work putting together these data points into models for weather forecasts.

[Related: A major player in the AV space is hitting the brakes—here’s why]

But the structures and locations of cities can make matters complicated. In urban environments like New York City, the sprawl of buildings and local greenery can alter the local weather from block to block. And in San Francisco, the notoriously mercurial fog can influence microclimates from one neighborhood to the next as it materializes and disintegrates. 

Last year, to make it easier to get around in San Francisco’s foggy streets, Waymo engineers deployed updates to their radar, which uses microwaves to measure velocity and see through misty conditions. 

They also built a sensor cleaning system that keeps surfaces clear of obstructions from droplets and road grime. Lastly, they added new horn-like devices to the top of their vehicles that function like mobile weather stations and can collect data on fog, including the density of the droplets. This updated suite of sensors allows the vehicle to detect the local weather, or microclimate, around itself and adjust driving behaviors accordingly. 

[Related: A decked out laser truck is helping scientists understand urban heat islands]

In a blog published Monday, Waymo engineers explain that by integrating data from window conditions (whether there are raindrops on it) and the cameras, radar and lidar on the vehicle with information from weather visibility sensors, they’ve been able to devise a new “quantitative metric about meteorological visibility” that Waymo Driver can employ to “generate estimates about the current weather and environmental states it’s operating under.”
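
As a loose illustration of that kind of sensor fusion, the sketch below blends a few normalized signals into a single visibility estimate. The inputs, weights, and output scale are invented for the example; Waymo has not published the details of its metric.

    # Hypothetical fusion of onboard signals into one visibility estimate (meters).
    # The weights and scaling are illustrative assumptions, not Waymo's actual metric.
    def estimate_visibility_m(camera_contrast, lidar_return_rate, radar_confidence, window_wetness):
        """All inputs are normalized to the range 0-1."""
        clarity = (0.4 * camera_contrast
                   + 0.3 * lidar_return_rate
                   + 0.2 * radar_confidence
                   + 0.1 * (1.0 - window_wetness))
        return 50 + clarity * 1950   # map onto roughly 50 m (dense fog) to 2,000 m (clear)

    print(estimate_visibility_m(0.9, 0.85, 0.95, 0.1))   # clear, dry day -> about 1,800 m
    print(estimate_visibility_m(0.2, 0.3, 0.7, 0.6))     # foggy, wet windshield -> about 730 m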

This tech has already been put to work in a fog map Waymo made for San Francisco. The company said that it intends to make “similar weather maps for additional cities” in the future.  

IBM’s biggest quantum chip yet could help solve the trickiest math problems https://www.popsci.com/technology/ibm-quantum-summit-osprey/ Thu, 10 Nov 2022 20:00:00 +0000 https://www.popsci.com/?p=486223
new flex wiring in the cryogenic quantum computer infrastructure
The new flex wiring in IBM's quantum cryostat. Connie Zhou / IBM

The company figured out how to fit three times more qubits on a quantum computer chip. Here's what's next.

Quantum computers, though delicate and finicky in their present forms, promise to excel at a specific set of tasks. For one, they should be able to solve certain types of math problems involving linear algebra faster and more accurately than classical computers, due to a quirk in their design. 

Classical computers have binary switches that represent information as either zero or one. The quantum equivalent, called a qubit, can represent information as one, zero, or a combination of the two. That’s because instead of having bits that store either on-or-off states, qubits store waveforms. 
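
A small numpy example makes the distinction concrete: a qubit’s state is a pair of complex amplitudes, and measuring it yields zero or one with probabilities given by the squared magnitudes of those amplitudes.

    # A qubit as a two-component state vector rather than a single 0/1 bit.
    import numpy as np

    zero = np.array([1, 0], dtype=complex)   # the |0> state
    one = np.array([0, 1], dtype=complex)    # the |1> state

    psi = (zero + one) / np.sqrt(2)          # an equal superposition of both

    probabilities = np.abs(psi) ** 2         # Born rule: probability = |amplitude|^2
    print(probabilities)                     # [0.5 0.5] -> a 50/50 chance of reading 0 or 1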

In the quantum field, IBM’s researchers have been hard at work, updating their suite of hardware and software inside a device that aims to solve problems deemed difficult or impossible for the best classical computers available today. 

In May, IBM unveiled an ambitious roadmap towards making quantum computing both more powerful and more practical. At the IBM Summit this week, the company announced the checkpoints they’ve hit so far, including a newly completed 433-qubit processor called Osprey, and updated versions of their quantum software. 

Engineering photo
Jerry Chow and Jay Gambetta presenting the Osprey processor inside a printed circuit board at the IBM Summit. Charlotte Hu

For these devices to maintain their quantum properties, they need to be kept at very cold temperatures using specialized fridges and cooling systems. IBM shared a progress report on where they are in assembling the different parts that will comprise their 2023 cryogenic fridge infrastructure, which is called System Two. It will be responsible for containing all the quantum computing equipment and keeping it stable. 

[Related: IBM’s massive ‘Kookaburra’ quantum processor might land in 2025]

IBM’s quantum chips are named after birds. Osprey, which is almost three times larger than a previous 127-qubit chip called Eagle, uses many of the same technologies and designs, like a hexagon lattice structure on the chip surface that holds all the qubits. But 400 qubits can be a lot to manage, so engineers are constantly experimenting with fabrication techniques or small changes in design to make the processors less noisy and more efficient. 

“Every time you need to engineer it to be 3x, there’s things that come into play to make sure it works,” says Jerry Chow, director of quantum hardware system development at IBM Quantum. “With the Osprey, a lot of it comes down to further developing and scaling the multi-level wiring that was common in the Eagle stack, but also ways of optimizing it in order to pack more qubits together and route them together.”

One way of doing this is by tweaking a structural component in the quantum computer: Replacing the hand-crafted coaxial cables with ribbon-like, higher-density cabling called flex wiring reduces the overall size footprint (and cost). These ribbons can also be pancaked and stacked together in a staircase fashion to connect the different plates of the fridges. This change can increase the number of signals that can be relayed.

Along with new processors, the team announced new generations of control electronics, and improvements to older chips like Eagle and Falcon. 

One metric IBM has been trying to improve across all its chips is the coherence time of the qubits. Coherence time refers to how long the qubits stay in their wave-like quantum state. Qubits can lose this property if there’s too much noise interference from other qubits and the environment, which can result in decoherence. That messes with the results of calculations. The coherence time, how long it takes to do a quantum gate, and the number of qubits available are factors that determine the performance of a chip. They also set a limit on the size of problems that a certain device can take on. 
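
That trade-off can be put into rough numbers. Using the median Osprey coherence time reported a few paragraphs below and an assumed (not IBM-published) gate duration, a quick calculation shows how many sequential operations fit inside the coherence window.

    # Back-of-the-envelope: how many gates fit inside a qubit's coherence window.
    # The coherence figure matches the Osprey number cited in this article;
    # the gate duration is an assumed, illustrative value.
    coherence_us = 75     # median Osprey coherence time, microseconds
    gate_ns = 300         # assumed gate duration, nanoseconds

    gates_per_window = (coherence_us * 1_000) / gate_ns
    print(round(gates_per_window))   # ~250 operations before decoherence washes out the result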

[Related: In photos: Journey to the center of a quantum computer]

Right now, the median coherence time for Osprey is around 70 to 80 microseconds. Chow and his colleagues have been boosting the coherence times of qubits on earlier chips like Eagle and Falcon by a factor of three. 

“Falcon increased by a factor of three to around 300 to 400 microseconds. Eagle, last year when we released it, it was in the 90 microsecond to 100 microsecond range,” Chow says. “Now, we have our Eagle revision three which can also hit 300 microseconds of coherence times.” 

The second revision of Osprey, which is already being put together, shows a similar improvement in its coherence times. 

IBM hasn’t released Osprey yet to any clients. “It’s still in the phase of working out how well it works, characterizing it,” Chow says. “We certainly have coherence times and some of the more fundamental aspects of it, but we’re still going to work on bringing it up in a full system over the next few months leading towards client deployment early next year.” 

The summit was also an opportunity to demonstrate all the applications that IBM’s partners have found so far with quantum technology, including using it for detecting gravitational waves, finding fraud in card payment data, calculating weather-related risks for energy storage problems, and simulating the properties of molecules to design new materials.

This AI can harness sound to reveal the structure of unseen spaces https://www.popsci.com/technology/neural-acoustic-field-model/ Wed, 09 Nov 2022 20:00:00 +0000 https://www.popsci.com/?p=485701
a stage filled with lights and music equipment
Deposit Photos

It's called a neural acoustic field model, and it can also consider what noises would sound like as you traveled through virtual reality.

Imagine you’re walking through a series of rooms, circling closer and closer to a sound source, whether it’s music playing from a speaker or a person talking. The noise you hear as you move through this maze will distort and fluctuate based on where you are. Considering a scenario like this, a team of researchers from MIT and Carnegie Mellon University have been working on a model that can realistically depict how the sound around a listener changes as they move through a certain space. They published their work on this subject in a new preprint paper last week. 

The sounds we hear in the world can vary depending on factors like what type of spaces the sound waves are bouncing off of, what material they’re hitting or passing through, and how far they need to travel. These characteristics can influence how sound scatters and decays. But researchers can reverse engineer this process as well. They can take a sound sample, and even use that to deduce what the environment is like (in some ways, it’s like how animals use echolocation to “see”).

“We’re mostly modeling the spatial acoustics, so the [focus is on] reverberations,” says Yilun Du, a graduate student at MIT and an author on the paper. “Maybe if you’re in a concert hall, there are a lot of reverberations, maybe if you’re in a cathedral, there are many echoes versus if you’re in a small room, there isn’t really any echo.”

Their model, called a neural acoustic field (NAF), is a neural network that can account for the position of both the sound source and listener, as well as the geometry of the space through which the sound has traveled. 

To train the NAF, researchers fed it visual information about the scene and a few spectrograms (visual representations that capture the amplitude, frequency, and duration of sounds) of audio gathered from what the listener would hear at different vantage points and positions. 
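
In spirit, the model is a function from positions to sound. The toy PyTorch network below maps source and listener coordinates plus a time-frequency point to a predicted spectrogram magnitude; it is a stand-in for the idea, not the MIT/CMU architecture.

    # Toy "acoustic field": (source position, listener position, frequency, time) -> magnitude.
    # A stand-in for the idea only, not the authors' actual NAF model.
    import torch
    import torch.nn as nn

    class ToyAcousticField(nn.Module):
        def __init__(self, hidden=128):
            super().__init__()
            self.net = nn.Sequential(
                nn.Linear(6, hidden), nn.ReLU(),      # 6 inputs: source xy, listener xy, freq, time
                nn.Linear(hidden, hidden), nn.ReLU(),
                nn.Linear(hidden, 1),                 # predicted log-magnitude at that point
            )

        def forward(self, source_xy, listener_xy, freq, frame):
            return self.net(torch.cat([source_xy, listener_xy, freq, frame], dim=-1))

    field = ToyAcousticField()
    out = field(torch.tensor([[0.0, 0.0]]), torch.tensor([[2.5, 1.0]]),
                torch.tensor([[40.0]]), torch.tensor([[10.0]]))
    print(out.shape)   # torch.Size([1, 1]) -- one predicted spectrogram value per query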

“We have a sparse number of data points; from this we fit some type of model that can accurately synthesize how sound would sound like from any location position from the room, and what it would sound like from a new position,” Du says. “Once we fit this model, you can simulate all sorts of virtual walk-throughs.”

The team used audio data obtained from a virtually simulated room. “We also have some results on real scenes, but the issue is that gathering this data in the real world takes a lot of time,” Du notes. 

Using this data, the model can learn to predict how the sounds the listener hears would change if they moved to another position. For example, if music was coming from a speaker at the center of the room, this sound would get louder if the listener walked closer to it, and would become more muffled if the listener walked into another room. The NAF can also use this information to predict the structure of the world around the listener. 

One big application of this type of model is in virtual reality, so that sounds could be accurately generated for a listener moving through a space in VR. The other big use Du sees is in artificial intelligence. 

“We have a lot of models for vision. But perception isn’t just limited to vision, sound is also very important. We can also imagine this is an attempt to do perception using sound,” he says. 

Sound isn’t the only medium that researchers are playing around with using AI. Machine learning technology today can take 2D images and use them to generate a 3D model of an object, offering different perspectives and new views. This technique comes in handy especially in virtual reality settings, where engineers and artists have to architect realism into screen spaces. 

Additionally, models like this sound-focused one could enhance current sensors and devices in low light or underwater conditions. “Sound also allows you to see across corners. There’s a lot of variability depending on lighting conditions. Objects look very different,” Du says. “But sound kinda bounces the same most of the time. It’s a different sensory modality.”

For now, a main limitation to further development of their model is the lack of information. “One thing that was surprisingly difficult was actually getting data, because people haven’t explored this problem that much,” he says. “When you try to synthesize novel views in virtual reality, there’s tons of datasets, all these real images. With more datasets, it would be very interesting to explore more of these approaches especially in real scenes.”

Watch (and listen to) a walkthrough of a virtual space, below:

Google’s AI has a long way to go before writing the next great novel https://www.popsci.com/technology/google-ai-wordcraft/ Thu, 03 Nov 2022 23:00:00 +0000 https://www.popsci.com/?p=484107
person's hand on typewriter
David Klein / Unsplash

It’s a work in progress.

Artificial intelligence has come a long way since the 1950s, and it has taken on an impressive array of tasks. It can solve math problems, detect natural disasters, identify different living organisms, pilot ships and more. But for tech giants like Google and Meta, one of their holy grails is formulating an AI that can understand language the way that humans do (a quest that at times, comes with its own set of conflicts). 

A key test for language models is writing—an exercise that many people struggle with as well. Google engineers designed a proof-of-concept experiment called Wordcraft that used its language model LaMDA to write fiction. The tool was first built two years ago and is still far from becoming a publicly usable product. 

So, what exactly is Wordcraft? And what can it do? Google describes it as “an AI-powered text editor centered on story writing” that can act as a kind of assistant to help authors brainstorm ideas or overcome writer’s block. To gauge where Wordcraft can fit into the creative process, Google recruited 13 English-language writers to use the tool to construct stories—here’s what they came up with

Writers can give Wordcraft prompts like what type of story they want (such as mystery), and what they want the story to be about (say, fishermen). They can also ask the model to follow up on their thoughts, describe certain scenes, create characters, rewrite phrases to be more funny or more sad, and refine or replace certain words. Wordcraft can also respond to more “freeform prompts,” like explaining why someone is doing something. Since LaMDA is a conversational AI, Wordcraft features a chatbot that writers can communicate with about how they want the story to go. (More about the controls in Wordcraft can be found in the team’s two whitepapers). 
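
Under the hood, controls like these boil down to prompt templates wrapped around a text generator. The sketch below shows that shape; the generate function is a hypothetical placeholder, since LaMDA is not a public API.

    # Prompt-template sketch of Wordcraft-style controls.
    # `generate` is a hypothetical stand-in for a call to a large language model.
    def generate(prompt: str) -> str:
        return "<model output would appear here>"   # placeholder

    def rewrite(text: str, tone: str) -> str:
        return generate(f"Rewrite the following passage to be more {tone}:\n\n{text}")

    def describe(story_so_far: str, scene: str) -> str:
        return generate(f"Story so far:\n{story_so_far}\n\nDescribe {scene} in vivid detail.")

    print(rewrite("The fisherman stared at the empty net.", "funny"))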

AI photo
Google AI

These models have learned information from the open web, and writers can experiment with the instructions to have it give them back what they want. “The authors agreed that the ability to conjure ideas ‘out of thin air’ was one of the most compelling parts of co-writing with an AI model. While these models may struggle with consistency and coherence, they excel at inventing details and elaboration,” Google engineers wrote in a blog post about Wordcraft. 

Many of these details end up being quite surreal, since the model lacks direct knowledge of the physical world. It’s more like rolling a die on randomly related internet searches. “For instance, Ken Liu asked the model to ‘give a name to the syndrome where you falsely think there’s a child trapped inside an ATM.’ (the model’s answer: ‘Phantom Rescue Syndrome’),” Google engineers noted in the blog. 

[Related: Researchers used AI to explain complex science. Results were mixed.]

In the past few years, AIs have been used to write screenplays, news articles, novels, and even science papers. But these models are still filled with flaws, and are constantly evolving. There are still risks associated with them, one of the biggest being that even though they can write passably like humans, they don’t truly understand what they’re saying. And importantly, they cannot operate completely independently yet. 

Douglas Eck, senior research director at Google Research, noted at a recent Google event focused on AI, that Wordcraft can enhance stories but cannot write whole stories. Presently, the tool is geared towards fiction because in its current mode, it can miss context or mix up details. It can only generate new content based on the previous 500 words. 

Additionally, many writers have complained that the writing style of Wordcraft is quite basic. The sentences it constructs tend to be simple, straightforward, and monotone. It can’t really mimic the style or voice of prose. And because the model is biased towards non-toxic content on the web, it’s reluctant to say mean things, which actually can be a shortcoming: sometimes that’s needed to make conflict. As it’s trained on the internet, it tends to gravitate towards tropes, which makes stories less unique and original. “For example, Nelly Garcia noted the difficulty in writing about a lesbian romance — the model kept suggesting that she insert a male character or that she have the female protagonists talk about friendship,” Google engineers wrote. 

Daphne Ippolito, one of the researchers on the Wordcraft team, suggested that adding parameter-efficient tuning, which they can customize and implement on top of their current model, could potentially help them generate different writing styles, like Shakespeare’s. But whether it can convincingly reproduce the subtle style differences between two Victorian-era writers, like Charles Dickens and Charlotte Brontë, is a question for further exploration. (Interestingly enough, Ippolito has worked on a separate project called Real or Fake text, which asks users to distinguish between AI versus human writing for recipes, news articles, and short stories.) 

Ippolito also says that Wordcraft might not end up being the best model for a writer’s assistant. How they design or modify the AI can vary depending on what the writer wants help with—whether it’s plot, characters, fantasy geography, or story outline. 

Meta’s AI could shake up how we study protein structures https://www.popsci.com/technology/meta-ai-protein-folding-prediction/ Tue, 01 Nov 2022 19:00:00 +0000 https://www.popsci.com/?p=483144
protein structure 3D
Argonne National Lab / Flickr

It is using language models to predict how proteins fold.

Proteins are an essential part of keeping living organisms up and running. They help repair cells, clear out waste, and relay correspondences from one end of the body to the other. 

There’s been a great deal of work among scientists to decipher the structures and functions of proteins, and to this end, Meta’s AI research team announced today that they have developed a model that can predict the 3D structure of proteins based on their amino acid sequences. Unlike previous work in the space, such as DeepMind’s, Meta’s AI is based on a language model rather than a shape-and-sequence matching algorithm. Meta is not only releasing its preprint paper on this research, but will be opening up both the model and the database of proteins to the research community and industry. 

First, to contextualize the importance of understanding protein shapes, here’s a brief biology lesson. Certain triplet sequences of nucleotides from genes, called codons, are translated into amino acids by a molecule in the cell called a ribosome. Proteins are chains of amino acids that have arranged themselves into unique forms and configurations. An emerging field of science called metagenomics is using gene sequencing to discover, catalog, and annotate new proteins in the natural world. 
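
The translation step is mechanical enough to sketch in a few lines of Python. The table below holds only a handful of codons for illustration, but the reading-frame logic is the same one the ribosome follows.

    # Toy gene-to-protein translation: read the sequence three bases at a time.
    # Only a few codons are included; a real codon table has 64 entries.
    CODON_TABLE = {
        "ATG": "M",   # methionine, the usual start
        "TTT": "F",   # phenylalanine
        "GGC": "G",   # glycine
        "AAA": "K",   # lysine
        "TGA": "*",   # stop
    }

    def translate(dna: str) -> str:
        protein = []
        for i in range(0, len(dna) - 2, 3):
            amino_acid = CODON_TABLE.get(dna[i:i + 3], "X")   # "X" = codon outside this toy table
            if amino_acid == "*":
                break
            protein.append(amino_acid)
        return "".join(protein)

    print(translate("ATGTTTGGCAAATGA"))   # -> "MFGK"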

Meta’s AI model is a new protein-folding approach inspired by large language models that aims to predict the structures of the hundreds of millions of protein sequences in metagenomics databases. Understanding the shapes that these proteins form will give researchers clues about how they function, and what molecules they interact with. 

[Related: Meta thinks its new AI tool can make Wikipedia more accurate]

“We’ve created the first large-scale characterization of metagenomics proteins. We’re releasing the database as an open science resource that has more than 600 million predictions of protein structures,” says Alex Rives, a research scientist at Meta AI. “This covers some of the least understood proteins out there.” 

Historically, computational biologists have used evolutionary patterns to predict the structures of proteins. Proteins, before they’re folded, are linear strands of amino acids. When the protein folds into complex structures, certain sequences that may appear far apart in the linear strand could suddenly be very close to one another. 

Protein language modeling from Meta AI
Meta AI

“You can think about this as two pieces in a puzzle where they have to fit together. Evolution can’t choose these two positions independently because if the wrong piece is here, the structure would fall apart,” Rives says. “What that means then is if you look at the patterns of protein sequences, they contain information about the folded structure because different positions in the sequence will co-vary with each other. That will reflect something about the underlying biological properties of the protein.” 

Meanwhile, DeepMind’s innovative approach, which first debuted in 2018, relies chiefly on a method called multiple sequence alignment. It basically performs a search over massive evolutionary databases of protein sequences to find proteins that are related to the one that it’s making a prediction for. 

“What’s different about our approach is that we’re making the prediction directly from the amino acid sequence, rather than making it from this set of multiple related proteins and looking at the patterns,” Rives says. “The language model has learned these patterns in a different way. What this means is that we can greatly simplify the structure prediction architecture because we don’t need to process this set of sequences and we don’t need to search for related sequences.” 

These factors, Rives claims, allow their model to be speedier compared to other technology in the field. 

[Related: Meta wants to improve its AI by studying human brains]

How did they train this model to be able to do this task? It took two steps. First, they had to pre-train the language model across a large number of proteins that have different structures, come from different protein families, and are taken all across the evolutionary timeline. They used a version of masked language modeling, where they blanked out portions of the amino acid sequence and asked the algorithm to fill in those blanks. “The language training is unsupervised learning, it’s only trained on sequences,” Rives explains. “Doing this causes this model to learn patterns across these millions of protein sequences.”  
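
The sketch below shows that masking step on a short amino acid sequence with a deliberately tiny PyTorch model. It is illustrative only: the real ESM models are vastly larger, and in practice the loss is usually computed only at the masked positions.

    # Masked-language-model training sketch on an amino acid sequence.
    # The model here is a toy stand-in, not Meta's ESM.
    import random
    import torch
    import torch.nn as nn

    AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"
    stoi = {a: i for i, a in enumerate(AMINO_ACIDS)}
    MASK_ID = len(AMINO_ACIDS)                    # one extra token id for the mask

    def mask_sequence(seq, p=0.15):
        ids = [stoi[a] for a in seq]
        targets = list(ids)
        for i in range(len(ids)):
            if random.random() < p:
                ids[i] = MASK_ID                  # blank out roughly 15% of positions
        return torch.tensor([ids]), torch.tensor([targets])

    model = nn.Sequential(
        nn.Embedding(MASK_ID + 1, 32),
        nn.Linear(32, len(AMINO_ACIDS)),          # guess which amino acid was hidden
    )

    inputs, targets = mask_sequence("MKTAYIAKQRQISFVKSHFSRQLEERLGLIEVQ")
    logits = model(inputs)
    # For brevity the loss scores every position; real setups score only the masked ones.
    loss = nn.functional.cross_entropy(logits.view(-1, len(AMINO_ACIDS)), targets.view(-1))
    loss.backward()
    print(float(loss))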

Then, they froze the language model and trained a folding module on top of it. In the second stage of training, they used supervised learning. The supervised learning dataset is made up of a set of structures from the Protein Data Bank that researchers from across the world have submitted. That is then augmented with predictions made using AlphaFold (DeepMind’s technology). “This folding module takes the language model input and basically outputs the 3D atomic coordinates of the protein [from the amino acid sequences],” Rives says. “That produces these representations and those are projected out into the structure using the folding head.” 

Rives imagines that this model could be used in research applications such as understanding the function of a protein’s active site at the biochemical level, which is information that could be very pertinent for drug development and discovery. He also thinks that the AI could even be used to design new proteins in the future.

Watch this penny-sized soft robot paddle with hydrogel fins https://www.popsci.com/technology/hydrogel-aquabot/ Sat, 29 Oct 2022 11:00:00 +0000 https://www.popsci.com/?p=481993
hydrogel swimming robot
Science Robotics

Researchers believe that this robot design can be one day repurposed for applications like biomedical devices.

Engineers have long been interested in making water-faring robots. We’ve seen robots fashioned after tuna, suckerfish, octopuses and more. Now a new type of aquabot is swimming onto the scene. Researchers from Korea University and Ajou University designed insect-sized robots that can swim through the water using hydrogel fins and paddles. They describe the process behind making and operating these robots in a new paper out this week in Science Robotics. 

Hydrogels are 3D structures made from crosslinked molecules. They can be made from synthetic or natural materials and tend to swell in water. Some hydrogels can even change shape in response to external stimuli like variations in pH, temperature, ionic strength, solvent type, electric and magnetic fields, light, and more. 

The medical industry has been exploring how to use hydrogels for applications such as wound dressing. But robot engineers have also been interested in using hydrogel to make soft robots—just check out this nifty drug-delivering jellyfish-like aquabot from 2008. Of course, the designs of such robots are always being reimagined and optimized.

[Related: A tuna robot reveals the art of gliding gracefully through water]

The new, free-floating bots from the Korea University and Ajou University team have porous hydrogel paddles that are coated with a webbing of nanoparticles, or “wrinkled nanomembrane electrodes.” The onboard electronics were shrunk down to the size of a penny and embedded in the body of the robot. Inside the body is an actuator, or motor, that can apply different voltages of electric potential across the hydrogel. To move the robot, researchers can alter an external electric field to induce electroosmosis-driven hydraulic pumping—where charged surfaces change the water flow around them. 

“In particular, our soft aquabots based on WNE actuators could be constructed without any high-voltage converter and conventional transmission system, which have considerably limited the miniaturization of soft robots,” the researchers wrote. “However, to be a fully autonomous robotic system, sensing components should be further integrated for the recognition of position and orientation of the robot. We believe that our approach could provide a basis for developing lightweight, high-performance soft actuators and robots at small scale that require a variety of motions under electric stimuli.”

Watch the tiny aquabot in motion below:

Shutterstock and OpenAI have come up with one possible solution to the ownership problem in AI art https://www.popsci.com/technology/shutterstock-ai-image-generator/ Wed, 26 Oct 2022 23:00:00 +0000 https://www.popsci.com/?p=481330
paintings on a red wall

Conversations have been brewing around how artists get compensated for AI-generated works.

Gone may be the days of culturally viral stock photoshoots and accidental stock photo celebrities. The stock photos of the future may not require human models, or even human photographers. On October 25, Shutterstock announced that it was expanding its partnership with OpenAI to integrate the image generation capabilities of DALL-E into the Shutterstock platform. 

Shutterstock, an online marketplace for visual imagery that has built up a wide-spanning collection of licensed photographs, vectors, illustrations, 3D models, videos, and music, has been selling its content and metadata to OpenAI since 2021, as the company built and refined DALL-E.

Paul Hennessy, Shutterstock’s CEO, told TechCrunch that the company has had a “long history of integrating AI into every part of our business.” This much is true. In 2017, the company introduced an AI tool that can detect image composition and where the subject matter appears in the frame, and in 2021, it acquired three AI platforms in order to sort through content, build better computer vision models, and surface data insights.   

This new feature will launch on Shutterstock.com in the coming months, and when it does, customers will be able to use the AI image generator to call up strange and surreal stock photos with text prompts (here are some examples of what they would look like). 

New tools come with potentially new revenue streams for the company’s contributor community. Shutterstock also announced that it will roll out a royalties compensation framework for artists whose work has been used in the development of the AI models. 

[Related: Adobe’s new AI can turn a 2D photo into a 3D scene]

“The data we licensed from Shutterstock was critical to the training of DALL-E,” Sam Altman, OpenAI’s CEO, said in a press release. “We’re excited for Shutterstock to offer DALL-E images to its customers as one of the first deployments through our API, and we look forward to future collaborations as artificial intelligence becomes an integral part of artists’ creative workflows.”

As AI-generated art becomes more mainstream, conversations have sprouted up in the space about the complicated ownerships behind these works. 

Interestingly, in September, Shutterstock and Getty were reportedly removing and even banning AI-made images from their sites, citing copyright concerns. More specifically, The Verge noted that Shutterstock, following the recent announcement, will be banning AI-generated artworks that were not created with DALL-E. The reasoning is that Shutterstock might not be able to “validate the model used to create the content so can’t be sure who owns the copyright.”

The compensation model that Shutterstock has in mind is a way to share the revenue from image downloads, data shares, and other forms of licensing. “The share individual contributors receive will be proportionate to the volume of their content and metadata that is included in the purchased datasets,” a Shutterstock spokesperson told The Verge. But some artists have expressed their doubts about the actual payout of the fund. 
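
A payout that is “proportionate to the volume of their content” reduces to simple arithmetic. The fund size and asset counts below are made-up numbers, used only to show the shape of the calculation.

    # Illustrative proportional payout; all figures are invented for the example.
    fund_dollars = 100_000
    assets_in_dataset = {"contributor_a": 1200, "contributor_b": 300, "contributor_c": 500}

    total_assets = sum(assets_in_dataset.values())
    payouts = {name: fund_dollars * count / total_assets
               for name, count in assets_in_dataset.items()}
    print(payouts)   # a: $60,000, b: $15,000, c: $25,000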

What Pong-playing brain cells can teach us about better medicine and AI https://www.popsci.com/technology/cortical-labs-dishbrain-pong/ Fri, 21 Oct 2022 18:30:00 +0000 https://www.popsci.com/?p=480122
brain cells on a multi-electrode array
Brain cells on a multi-electrode array. Cortical Labs

A system called “DishBrain” could be important for testing epilepsy and dementia drugs in the future.

Scientists have taught 800,000 living brain cells in a dish how to play the iconic arcade game “Pong.” 

This is the work of a team of neuroscientists and programmers from Cortical Labs, Monash University, RMIT University, University College London and the Canadian Institute for Advanced Research. Their detailed findings were published earlier this month in the journal Neuron. 

Of course, the actual setup is more complex than just putting a glob of neurons in a Petri dish. In this system, called DishBrain, the nerve cells are overlaid on a multi-electrode array, which is like a sort of CMOS chip that’s able to read very small changes in the electrical activity of the neurons. 

Nerve cells have well-known action potentials—they will fire in response to a certain sequence of voltage changes across the cell membrane. This makes them behave almost like gates in a computer circuit. 

The cells connect to each other, integrate into the chip, and can survive for many months. The electrode array allows researchers to send and read out signals from the nerve cells at specific locations on the grid, at a given rate. So, electrodes on the array could fire on one side or the other to tell DishBrain which side the ball was on, and the frequency of signals could indicate how far away the ball was from the paddle. By lighting up a certain pre-programmed arrangement of electrodes, DishBrain could trigger motor activities, like moving the in-game paddle up and down.
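
The encoding scheme can be sketched as a small function: which electrodes are stimulated encodes the ball’s side, and the stimulation rate encodes its distance. The numbers and the stimulate stand-in below are hypothetical, not Cortical Labs’ published protocol.

    # Hypothetical place-and-rate coding of the ball's position for DishBrain.
    def encode_ball(ball_x, paddle_x, court_width=1.0):
        side = "left electrodes" if ball_x < paddle_x else "right electrodes"
        distance = abs(ball_x - paddle_x) / court_width   # 0 (at the paddle) .. 1 (far away)
        rate_hz = 4 + (1.0 - distance) * 36               # assumed: closer ball -> faster pulses
        return side, rate_hz

    def stimulate(side, rate_hz):
        print(f"stimulating {side} at {rate_hz:.0f} Hz")  # placeholder for driving the array

    stimulate(*encode_ball(ball_x=0.2, paddle_x=0.8))     # far ball on the left: slow pulses
    stimulate(*encode_ball(ball_x=0.9, paddle_x=0.8))     # near ball on the right: rapid pulses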

“We can kind of decode information going out and encode information going in just through these very small electrical signals and use that to represent what’s happening to the cells,” says Brett Kagan, chief scientific officer of biotech start-up Cortical Labs and the lead author of the Neuron paper. “Video games help people understand what’s going on. If we simply did it as a function of random numbers, people wouldn’t appreciate or understand the significance of the results.”

But why did they choose “Pong?”

“From a science perspective, we needed a task that was real-time, continuous, and had a really discrete lose condition (the win condition wasn’t so important) that was pretty easy to conceptualize and to encode into the cells,” says Kagan. 

It is also a game that has been a testbed staple for the computational neuroscience community. For example, in 2013, Google’s DeepMind used “Pong” to train its machine learning algorithms. 

“If you think about it, there are really around six rules about how this environment works. This is what we call a structured information landscape,” says Hon Weng Chong, chief executive officer of Cortical Labs. “The conclusion that we have is that these neurons must be trying to create a model internally [influenced by these six rules]. Whatever it is, we don’t quite know yet, and this is out for further study. And it’s trying to use that to optimize for the parameters that we set, which is: don’t miss the ball, hit the ball.” 

[Related: How an AI managed to confuse humans in an imitation game]

Beyond figuring out how exactly perceived “intelligence” is arising from DishBrain, as a next step, the team wants to put its performance to the test against an artificial neural network. They also want to see how well DishBrain plays the game while under the influence of drugs and alcohol.

“We want to show that there’s a dose response curve on their ability to play the game as a way of validating that these neurons can be used in actual drug assays and discoveries and also for personalized medicine,” says Chong.

When Cortical Labs emerged from stealth in March 2020, it had a goal of building biological computer chips. 

Brain cells, Kagan notes, are an interesting biomaterial system that can efficiently process information in real-time without the need for mountains of input samples. “A fly, a very simple system, has more general intelligence in terms of navigating its environment than the best machine learning,” he says. “It does so with a fraction of the power consumption. Why mimic what you can harness?”

But while using these kinds of chips for applications in computer science research is interesting, Cortical Labs has a more immediate focus for how it plans to commercialize its tech platform. 

[Related: Growing Micro-Brains From Skin Cells Sheds Light On Autism]

“I think the primary commercial aspect for us is helping researchers in very difficult spaces like dementia research, epilepsy, and even depression, use the technology we’ve developed to look for new therapies and new drugs,” says Chong. “That’s kind of the commercialization point of view we’re trying to observe at the company.”

Since the nerve cells used in DishBrain can be derived from pluripotent human stem cells, this opens possibilities for personalized medicine. “You can take samples from donors, grow genotypically similar neurons, which we can then use to test drugs, which will then hopefully have the same parameters as donor cells,” Chong says. This could possibly shorten the process of trialing through different treatments for conditions like epilepsy. “If you had a system where you could get your answer immediately of which drug to take for the best result with the least amount of side effects, this would be a huge game changer for the lives of many people with this disease.” 

Why a tech giant is embracing old-school magnetic tape storage https://www.popsci.com/technology/ibm-diamondback-tape-library/ Thu, 20 Oct 2022 19:00:00 +0000 https://www.popsci.com/?p=479859
ibm diamondback tape storage library
IBM

IBM believes that tape technology will continue to be valuable for modern companies.

This week, IBM introduced the Diamondback Tape Library, reaffirming its position that magnetic tape is a form of data storage that’s here to stay. Yes, magnetic tape, kind of like what’s spooled inside the VHS tapes and cassettes of last century.

“Tape has been known to live in the middle of the desert for 40 years and still be recoverable,” says Shawn Brume, IBM’s storage strategist. “In 2010, the health of tape was really understood when all of the data used for [the Nimbus satellite project] were recovered from tapes that were at that time, 46-years-old.” 

IBM argues that this feature makes tape the ideal medium to store archival data that doesn’t need to be frequently accessed. Tape can also serve as an “air gap” backup—offline versions of important or sensitive files that are resistant to cyberattacks. The types of data that stay on tape span financial records, medical records, personally identifiable information, and documents that are part of a legal hold, Brume notes.  

In addition to offering protection against ever-evolving cyber threats, it’s also more conservative with power use. “Tape uses no energy with data at rest. Even when it’s drawing files back, it’s doing it with very little energy because it’s not trying to do it in a high-performance manner,” Brume says. 

The specs of IBM’s tapes

So what can magnetic tape do? 

A single tape cartridge is about 3 inches by 3 inches, and 3/4 of an inch thick. It’s smaller than a hard disk drive (HDD), but weighs about 0.6 kilograms (a bit more than a pound). One cartridge can store 18 terabytes of data uncompressed, and 45 terabytes compressed. IBM is working on doubling this capacity in the next generation of the technology. 

A library of tapes can range in size from something you can put on your desk to something that’s the size of a small refrigerator (about 8 square feet). The small refrigerator-sized library can hold 1,584 cartridges. IBM touts that its Diamondback Library will be the densest tape library on the market. It will be able to hold 69 petabytes of information while taking up less than 8 square feet of floor space. 
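
Those capacity claims are easy to sanity-check with the article’s own numbers.

    # Quick check of the library capacity using the figures quoted above.
    cartridges = 1584
    tb_per_cartridge_compressed = 45

    total_tb = cartridges * tb_per_cartridge_compressed
    print(total_tb / 1000, "petabytes")   # ~71 PB raw, in line with the ~69 PB usable figure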

“You get one stream of data out of one tape drive, you access the data, you have to wait a little while but it’s still coming back to you pretty fast: 1,000 megabits per second, compressed,” says Brume. 

How tape matches up to other storage tech

In a highly digital world, tape is one of the only technologies that uses analog signals to move part of the data, Brume says. “At its fundamental, tape is a lot like HDD. It uses magnetic-based materials. But in this case, tape is literally an underlayment of material [usually nylon] that has magnetic coating on it. And rather than a spinning disk, the tape comes in, it’s threaded. It can look like a VHS tape, but it’s much more robust.” 

The tape moves linearly into the tape drive and out into the tape cartridge. To write, the tape head takes electronic signals and creates a mini magnetic field that can change the polarity of the material on the film to make a pattern of zeros and ones. Once data is written on tape, it can’t be changed (but it can be erased and rewritten).

“The immutability and encryption capabilities of LTO tape [an open standards magnetic tape data storage technology], as well as the simplicity of creating an ‘air gap’ (removing the tape and storing it a vault) makes tape a key weapon in assuring data survival in the face of ransomware,” says Phil Goodwin, research vice president of infrastructure software platforms at IDC, an IT market intelligence firm.

Despite appearances, the technology that allows magnetic tape to achieve its storage capacity is actually very advanced. “These include advanced very thin plastic substrates, sophisticated multichannel heads, advanced read sensors employing giant magnetoresistance, advanced tracking systems and novel media like barium ferrite particles,” says James Bain, a professor of electrical and computer engineering at Carnegie Mellon University.

HDD writes data not in linear paths, but in concentric, circular paths. This means that with HDD, the head can easily be moved to a spot on the disk for easy data access, whereas with tape, that recall capacity is slower. “And certainly, we’re all slower than flash because flash is just electrons that have been charged up,” Brume notes. 

However, the advantage of tape is that it never has to be charged up. “You can take the tape, you can set it on the shelf,” Brume says. “If you leave an HDD powered down for too long, the head ends up with goopy stuff on it, and then it just won’t write or read.” 

Additionally, the integrity of data stored on tape doesn’t really deteriorate that much, even over long periods of time. “If you have a laptop and you have an HDD in it for a long time, you’ll notice when it comes off it’s really slow,” says Brume. “We don’t have that problem with tapes. Tapes are designed to not be accessed.”

Despite its benefits, Brume believes that tape, HDD, and more speculative forms of data storage, such as DNA, all have their own place in the technology ecosystem. That belief is echoed by some experts in the field.

“The message about tape is that, just as SSDs did not displace HDDs, HDDs, in turn will not displace tape systems. All of them co-exist in a tiered storage architecture where cost and latency are balanced at every level of the system,” says Bain. “As data centers grow, these tiered approaches make more sense to implement because the financial stakes become higher.”

[Related: Can these government efforts crack the code for DNA storage?]

The IBM tape team is part of the DNA storage alliance. “We’re very much into it because it represents another tier. It’s just like the difference between HDD and tape. HDD is faster than tape, and if you read the technology on DNA, you’re talking about 2 megabits per second,” Brume says. “But if you need to keep the data, and you want multiple copies, and it doesn’t matter how fast you’re going to get it back, but you need to retain it in perpetuity, DNA is going to be a great choice in the future because it’s very open. We understand it very well. We know how to map it out.”

And as for the innovators using laser etching on glass discs for data storage, Brume says that they still have to contend with the challenges of creating cooling environments for the lasers, and the amount of power the lasers require. 

To Bain, although competing technologies like holographic or burned glass or DNA are interesting, he believes that there are still some kinks that need to be worked out first. “Things like DNA are extremely appealing because the densities they can achieve are potentially so high,” he says. “However, a host of other issues must be addressed in any storage system like reading and writing mechanisms, latencies, robustness, error correction and redundancy, etc.”

What companies are still using tape in their operations

Hyperscalers—companies that have grown so large that they offer their own infrastructure, or that generate massive amounts of data as a result of their infrastructure—always need many different forms of technology to handle the variety of data that come into their systems to power a range of services. 

These are organizations like CERN, as well as corporations like Amazon, Google, Meta, Baidu, Alibaba, and Tencent. 

“Magnetic tape beats hard disk and flash on longevity, financial cost, and carbon footprint cost, but loses out on access speed,” says Johnny Yu, a research manager at IDC. “You won’t want to put live production data or even backups on tape, but it’s perfect for any infrequently accessed data that needs to be retained for a very long time such as medical records or archival data.”

Christophe Bertrand, practice director of data management and analytics at IT analytics firm ESG, notes that there is no shortage of data. “Tape is a unique medium that allows a ton of data to be stored relatively quickly,” he says. And new use cases for these kinds of valuable but infrequently accessed data, like rendering systems, analytics, models and simulations, are always emerging. 

As enterprises push for vendors to meet sustainability requirements, tape is becoming more popular because of its low power consumption and its recyclability, Bertrand says. In the long run, tape will surely see more competition in the space, he notes, “but how long will it take for this new tech to be mature, and be at a scale and price point that makes sense?”

As California plans for a new desalination plant, take a look at how these facilities work https://www.popsci.com/technology/california-doheny-desalination-plant/ Mon, 17 Oct 2022 11:00:00 +0000 https://www.popsci.com/?p=478175
laguna beach
Derek Liang / Unsplash

The Doheny Ocean Desalination Project, estimated to be completed in 2027, will provide 5 million gallons of drinking water a day to residents in Orange County.

California, a state that has been facing devastating droughts and wildfires, approved a $140 million desalination plant on October 13 that would enable it to convert seawater into fresh water. It will join a cadre of 12 other facilities currently operating off the coast of California. 

While getting plans for the Doheny Ocean Desalination Project past the California Coastal Commission was a major regulatory hurdle that has since been cleared, the project will still need other state permits before construction can begin, according to Reuters. The proposed facility (estimated to be completed in 2027) will provide 5 million gallons of drinking water for 40,000 people in Orange County daily. This will reduce the district’s reliance on water imported from  the State Water Project and the Colorado River by up to 70 percent and bolster the emergency supply, LA Times reported. 

Desalination plants are useful in times of drought, but they often face pushback over environmental concerns that are directly tied to how the plants work. 

[Related: In photos: Dubai’s massive desalination plant]

Let’s take a look at how a desalination facility turns ocean water potable, and explore the workings of an already existing facility, the Lewis Carlsbad Desalination Plant, the biggest of its kind in the state (it produces 50 million gallons of drinking water a day). 

After seawater is sucked in, it goes through sand pretreatment filters that remove large objects and other solid materials. Then, the water goes through a reverse osmosis process, where membranes (semi-permeable barriers that only let molecules of certain sizes, shapes, and charges pass through) and filters separate out dissolved minerals, salts, and other impurities. Finally, chemicals are added, and the water is tested before it’s sent out to consumers. 

Plans for a larger, privately owned plant called Poseidon were rejected just a few months ago by the Coastal Commission because that plant would’ve sucked in water from above the ocean floor, which would be harmful to animals and other organisms that live there. Experts told PopSci in April that a slew of small desalination plants would be considerably better than one massive plant. These plants could also function like “washing machines” that help recycle groundwater and other dirty water. 

Doheny, the new plant, would not only be smaller in size compared to the proposed Poseidon, but it will use advanced slant wells to pull water from beneath the ocean floor, Cal Matters reported. However, more research is needed to assess its impact on deep ocean floor creatures. 

The other big ecological concern with desalination plants in general is the hypersaline brine discharged at the end of the reverse osmosis process. This is made up of salt, minerals, and other byproducts from desalination. Since brine is more dense than the water that’s sucked into the plant in the beginning, it could settle onto the seafloor and negatively impact the organisms there. This presents an issue, especially in areas with sensitive ecosystems. Doheny said that it will mix its brine with the district’s existing wastewater lines and dilute it before expelling it about two miles out into the ocean, LA Times reported. 

Finally, there are questions around how much energy these plants use. According to the Department of Energy, “large-scale desalination systems require tens of megawatts to run and provide tens of million gallons of desalinated water per day. Small-scale systems vary in size from tens to hundreds of kilowatts and provide hundreds to thousands of gallons of water per day.” 
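
Those Department of Energy figures imply a rough energy cost per gallon. The specific 30-megawatt, 30-million-gallon pairing below is an assumption picked from the middle of the quoted ranges.

    # Rough energy-per-gallon estimate from the DOE's "tens of megawatts for tens of
    # millions of gallons per day" figure; the exact values are illustrative assumptions.
    plant_power_mw = 30
    gallons_per_day = 30e6

    kwh_per_day = plant_power_mw * 1000 * 24
    kwh_per_thousand_gallons = kwh_per_day / (gallons_per_day / 1000)
    print(round(kwh_per_thousand_gallons, 1), "kWh per 1,000 gallons")   # about 24 kWh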

Some researchers have been looking into ways to hook up the facility to wind turbines or wave energy systems. On that end, Doheny hopes to power around 15 percent of the energy-intensive process through solar panels, and integrate an energy recovery process (likely by capturing hydraulic energy from the high pressure pumps used during reverse osmosis). It says on its project website that this would result in “45 to 55 percent less energy usage than systems without that feature.”

The post As California plans for a new desalination plant, take a look at how these facilities work appeared first on Popular Science.

Your gaming skills could help teach an AI to identify jellyfish and whales https://www.popsci.com/technology/mbari-ocean-vision-ai/ Wed, 12 Oct 2022 23:00:00 +0000 https://www.popsci.com/?p=477378
Barreleye fish from a deep-sea video taken by robotic ocean rovers in Monterey Bay
The barreleye fish is the living version of the galaxy-brain meme. MBARI

Marine biologists have too many images and not enough time to hand-annotate them all. This new project wants to help.

Today, there are more ways to take photos of the underwater world than anyone could have imagined at the start of the millennium, thanks to ever-improving designs for aquatic cameras. On one hand, these cameras have provided illuminating views of life in the seas. On the other hand, they have inundated marine biologists with mountains of visual data that are incredibly tedious and time-consuming to sort through.

The Monterey Bay Aquarium Research Institute (MBARI) in California has proposed a solution: a gamified machine-learning platform that can help process those videos and images. It’s called Ocean Vision AI, and it works by combining human-made annotations with artificial intelligence. Think of it like the eBird or iNaturalist apps, but modified for marine life.

The project is a multidisciplinary collaboration between data scientists, oceanographers, game developers, and human-computer interaction experts. On Tuesday, the National Science Foundation showed support for the two-year project by awarding it $5 million in funding.

“Only a fraction of the hundreds of thousands of hours of ocean video and imagery captured has been viewed and analyzed in its entirety and even less shared with the global scientific community,” Katy Croff Bell, founder and president of the Ocean Discovery League and a co-principal investigator for Ocean Vision AI, said in a press release. Analyzing images and videos in which organisms interact with their environment and with one another in complex ways often requires manual labeling by experts, a resource-intensive approach that is not easily scalable.

[Related: Why ocean researchers want to create a global library of undersea sounds]

“As more industries & institutions look to utilize the ocean, there is an increased need to understand the space in which their activities intersect. Growing the BlueEconomy requires understand[ing] its impact on the ocean environment, particularly the life that lives there,” Kakani Katija, a principal engineer at MBARI and the lead principal investigator for Ocean Vision AI, wrote in a Twitter post.

Here’s where artificial intelligence can come in. Marine biologists have already been experimenting with using AI software to classify sounds, like whale songs, in the ocean. The idea of Ocean Vision AI is to create a central hub that can collect new and existing underwater visuals from research groups, use them to train an organism-identifying artificial intelligence algorithm that can tell apart, say, the crab from the sponge in a frame, and share the annotated images with the public and the wider scientific community as a source of open data.
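
For a sense of what that training step involves, here is a minimal sketch of fine-tuning an off-the-shelf image classifier on annotated underwater frames. The folder layout, class set, and model choice are assumptions for illustration; this is not Ocean Vision AI’s actual pipeline.

```python
# Fine-tune a pretrained classifier on folders of annotated frames, e.g.
# annotations/train/crab/*.jpg, annotations/train/sponge/*.jpg, ...
import torch
from torch import nn
from torch.utils.data import DataLoader
from torchvision import datasets, models, transforms

transform = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])

train_set = datasets.ImageFolder("annotations/train", transform=transform)
loader = DataLoader(train_set, batch_size=32, shuffle=True)

# Start from ImageNet weights and swap in a new output layer, one unit per taxon.
model = models.resnet18(weights="IMAGENET1K_V1")
model.fc = nn.Linear(model.fc.in_features, len(train_set.classes))

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()

model.train()
for epoch in range(5):
    for images, labels in loader:
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()
    print(f"epoch {epoch}: last batch loss {loss.item():.3f}")
```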

[Related: Jacques Cousteau’s grandson is building a network of ocean floor research stations]

A key part of the equation is an open-source image database called FathomNet. According to NSF’s 2022 Convergence Accelerator portfolio, “the data in FathomNet are being used to inform the design of the OVAI [Ocean Vision AI] Portal, our interface for ocean professionals to select concepts of interest, acquire relevant training data from FathomNet, and tune machine learning models. OVAI’s ultimate goal is to democratize access to ocean imagery and the infrastructure needed to analyze it.”
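
In practice, pulling training data out of FathomNet would mean querying its public API for images labeled with a concept of interest. The endpoint path and response fields in the sketch below are assumptions for illustration only; the real interface is documented on the FathomNet site.

```python
# Hypothetical sketch of fetching concept-labeled images from FathomNet.
# The path and field names below are assumed, not the documented API.
import requests

BASE_URL = "https://fathomnet.org/api"  # real host; the path below is assumed

def images_for_concept(concept: str, limit: int = 100):
    """Fetch image records annotated with a given concept (assumed endpoint)."""
    resp = requests.get(
        f"{BASE_URL}/images/query/concept/{concept}",  # assumed path
        params={"limit": limit},
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()

for record in images_for_concept("Aurelia aurita")[:5]:
    # "url" and "boundingBoxes" are assumed field names in the response
    print(record.get("url"), len(record.get("boundingBoxes", [])))
```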

Ocean Vision AI will also have a video-game component that serves to engage the public in the project. The video game the team is developing “will educate players while generating new annotations” that can improve the accuracy of the AI models. 

Although the game is still in prototype testing, a sneak peek of it can be seen in a video posted by NSF to YouTube, which shows an interface that asks users whether a photo they saw contained a jellyfish (example images of jellyfish appear at the top of the screen).
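
The value of those quick player judgments comes from aggregating many of them per image. A simple majority-vote consensus, sketched below, is a common baseline for turning crowd answers into training labels; it is not necessarily the aggregation scheme Ocean Vision AI will ship.

```python
# Turn many players' yes/no answers into consensus labels: require a minimum
# number of votes and a clear majority before trusting an image's label.
from collections import Counter, defaultdict

def consensus_labels(votes, min_votes=5, min_agreement=0.8):
    """votes: iterable of (image_id, answer) pairs, with answer in {'yes', 'no'}."""
    by_image = defaultdict(list)
    for image_id, answer in votes:
        by_image[image_id].append(answer)

    labels = {}
    for image_id, answers in by_image.items():
        if len(answers) < min_votes:
            continue  # not enough players have seen this frame yet
        top_answer, count = Counter(answers).most_common(1)[0]
        if count / len(answers) >= min_agreement:
            labels[image_id] = top_answer
    return labels

print(consensus_labels([("frame_001", "yes")] * 8 + [("frame_001", "no")]))
# {'frame_001': 'yes'} -- 8 of 9 players agreed, clearing the 80% threshold.
```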

Here’s what the current timeline for the project looks like. By next summer, the team expects the first version of FathomNet (which is in beta right now) to be active, with a preliminary set of data. In 2024, the team will start exporting machine learning-labeled ecological survey data to repositories like the Global Biodiversity Information Facility, and look into building a potential subscription model for institutions. During this time, modules of the video game will be integrated into other popular games as well as museum and aquarium experiences. After field testing different versions, the team will finalize the design and release a standalone, multiplatform game in late 2024.

“The ocean plays a vital role in the health of our planet, yet we have only observed a tiny fraction of it,” Katija said in a press release. “Together, we’re developing tools that are urgently needed to help us better understand and protect our blue planet.”

The post Your gaming skills could help teach an AI to identify jellyfish and whales appeared first on Popular Science.

A new VR exhibit takes you inside the James Webb Space Telescope’s images https://www.popsci.com/technology/ashley-zelinskie-jwst-vr-exhibit/ Mon, 10 Oct 2022 20:00:00 +0000 https://www.popsci.com/?p=476083
a 3d printed sculpture of hands on top of the james webb telescope mirrors
Exploration JWST by Ashley Zelinskie. Patrick Delaney

Artist Ashley Zelinskie has filled a physical gallery with space-inspired sculptures, fog, lasers, and a VR headset.

In New York City’s ONX Studio, bits and pieces of the universe, as seen through the eyes of the James Webb Space Telescope (JWST), are on display. The new exhibit, “Unfolding the Universe: A NASA Webb VR Experience,” opened last week and is a collaboration between Mozilla Hubs, artist Ashley Zelinskie, and NASA. It was created to commemorate the launch of the space telescope last December.

Dispersed throughout the exhibition space are rooms with projected movies, desktop computers for users to try the online experience, silk prints, fake fog and laser lights (emulating the birth of stars), and conceptual sculptures inspired by interstellar travel.

At the center of the exhibit’s main room is a spot reserved for the virtual reality aspects of the experience—a digital gallery modeled after the images of galaxies and other celestial bodies from JWST. 

Photo: Patrick Delaney

Last Wednesday night, former astronaut Mike Massimino was decked out in a VR headset, headphones, and hand controllers, and ambled around an area whose virtual and physical boundaries had been marked out in the gallery with an outline of white masking tape. (Viewers at home can also join this part of the exhibition from the browsers on their phones, laptops, or desktops here.)

“I’m an astronaut but I’m not a young person who does a lot of virtual reality gaming. I don’t know if I controlled it as well as it could be controlled,” Massimino tells PopSci. Massimino, who went on spacewalking missions to repair and upgrade various elements of the Hubble Space Telescope in 2002 and 2009, has a special appreciation for the engineering it takes to collect the information needed to make science discoveries in space. “I worked on Hubble. I can appreciate the images. What [Zelinskie] has been able to do is apply an artistic interpretation of that wonder and discovery to it,” he says.

Zelinskie and Massimino playing with the VR component of the exhibit. Photo: Patrick Delaney

The virtual experience runs kind of like an online game. Viewers can navigate around a series of corridors in outer space and visit animated artworks or interactive avatars of the scientists Zelinskie interviewed while creating the work.

“She kept a lot of the details. What she made here is true to the science behind it and the way that the telescope works,” Massimino adds. “What I like in general about all of this stuff is that it’s taking very technical scientific discovery and it shows the beauty of images, and the beauty of the science behind it, but in a very artistic way so you can engage it at a different level.” 

The James Webb Space Telescope in VR

Zelinskie’s collaboration with NASA and the JWST team started around seven years ago. Since the start of the COVID-19 pandemic, they had been brainstorming creative ways to engage the public, and landed on the idea of creating a VR experience. They enlisted London-based virtual architects Metaxu Studios and Mozilla Hubs to develop the concept they had in mind.

[Related: Dive into the wonderful and wistful world of video game design]

“We were able to host a viewing party of the James Webb telescope launch on Christmas with a bunch of scientists and the public and we watched NASA Live TV in our Hubs space. We had each of the scientists in VR as avatars, and we streamed it to YouTube,” Zelinskie, a conceptual and mixed media artist, tells PopSci

When the JWST images were released by NASA in July, she wanted to incorporate some of the updated visual elements into an exhibit. 

She added a window of aurora borealis based on the spectroscopy graphs and data from JWST’s first images of exoplanets. There’s also a recurring motif of hexagons that appears in multiple installations, both in person and online. “The reason that they’re hexagons is because they had to fold up into the space capsule. That’s why the show is called ‘Unfolding the Universe,’ because the telescope had to unfold,” Zelinskie explains. “The cool thing about the hexagonal shape of mirrors is it makes this six-pointed star. You’re going to know it’s a Webb image because the stars in that image are going to have the same shape. It’s kind of like an artist signing its work.” 

Zelinskie also conducted interviews with several scientists and engineers, asking them about their career journeys, and their experiences working with JWST. 

“I wanted to house different portraits of the scientists; we did all the sound mapping so when you walk up to them, you can hear the sound of the interview, but then when you walk away, you’re not hearing it,” Zelinskie says. There’s a soundscape running across the virtual gallery that changes depending on where you are in the space. “That’s what [Mozilla] Hubs is really good at—sound tracking.”

Building out the virtual space

John Shaughnessy, Mozilla Hubs’ senior ecosystem and engineering manager, attests that enabling this kind of spatial audio in a device-agnostic browser setting is definitely challenging work. 

There are lots of features to consider, like distance-based fall-off of sound, so conversations close to users are loud, and those further away are quieter. There are also considerations around how sound propagates in the real world: sounds behave differently in a room with curtains on the walls than in a room with solid metal surfaces. “In fact, we’ve had blind users in Mozilla Hubs who have built add-ons for themselves, customizing the code so they can send audio pings out into the world and listen to how sound bounces off of virtual surfaces to navigate the 3D space without the use of eyesight,” Shaughnessy says. Plus, the team has to account for the varying quality of different users’ microphones, and for noise from things like keyboard typing.
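
The distance fall-off piece is usually handled with a standard attenuation curve. The sketch below uses the “inverse distance clamped” formula found in several audio engines, including the Web Audio API’s PannerNode; whether Hubs uses exactly this curve and these parameter values is an assumption for illustration.

```python
# Distance-based gain fall-off using the inverse-distance-clamped model.

def inverse_distance_gain(distance, ref_distance=1.0, rolloff=1.0, max_distance=20.0):
    """Return a 0..1 gain factor for a sound source `distance` meters away."""
    d = min(max(distance, ref_distance), max_distance)
    return ref_distance / (ref_distance + rolloff * (d - ref_distance))

for d in (0.5, 1, 2, 5, 10, 20):
    print(f"{d:>4} m -> gain {inverse_distance_gain(d):.2f}")
# A conversation 1 m away plays at full volume, while one 10 m away is cut to
# about a tenth of that -- which is what lets a nearby interview stand out
# against the rest of the virtual gallery.
```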

The VR portion of the exhibit can be accessed by anyone, anywhere, through a browser. Photo: Patrick Delaney

But it’s part of a larger effort to build the tech backbone that will one day power all types of immersive virtual and metaverse interactions. And these are problems that all metaverse and virtual reality platforms face.

“I think groups of people are going to want to meet in virtual spaces with one another, and we’re going to take that for granted. What we’re trying to do is build the bare bones, basic necessities so that it happens in an open and decentralized way,” Shaughnessy says. “For that we need two things. We need people to have a shared spatial awareness. The second one is a shared sense of presence.”

To this end, Shaughnessy says that they have been borrowing 3D graphics tricks used in game rendering to give the illusion of realism. For example, they use baked lighting to calculate shadows and reflections for fixed objects in the scene ahead of time, so that the math doesn’t need to be done in real time. They also use “level of detail” to keep objects close to the user in high definition while conserving overall memory.
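
Level of detail typically boils down to swapping in cheaper versions of a model as it gets farther from the camera. The thresholds and asset names in the sketch below are made up for illustration; they are not values from Hubs itself.

```python
# Pick a level-of-detail (LOD) mesh variant based on distance from the camera.

LOD_THRESHOLDS = [               # (max distance in meters, mesh variant)
    (5.0,  "sculpture_high.glb"),   # full-resolution mesh up close
    (15.0, "sculpture_med.glb"),
    (40.0, "sculpture_low.glb"),
]
FALLBACK = "sculpture_billboard.glb"  # flat stand-in for very distant objects

def pick_lod(distance_m: float) -> str:
    for max_dist, mesh in LOD_THRESHOLDS:
        if distance_m <= max_dist:
            return mesh
    return FALLBACK

for d in (2, 10, 30, 100):
    print(f"{d:>3} m -> {pick_lod(d)}")
```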

In this project specifically, Shaughnessy and Mozilla Hubs built the technology that renders the 3D scene of the meeting space and virtual gallery that Zelinskie and the JWST team came up with. “We gave them a tool where they can customize the look, the avatars that are in there, and how they can present this experience. We don’t control who comes and goes. We don’t monitor what you’re doing in that space,” says Shaughnessy.

Sound, of course, cannot travel through the vacuum of outer space. “Inside your space suit, when you’re space walking, it’s really quiet. You can bang with a hammer, and they’ll hear it inside the spaceship because the sound can travel within the structure, but you can’t hear anything,” Massimino notes. “You can hear yourself breathing inside. You can hear people talking to you in your headset. But what you always hear in the background is the whirring of a fan, which tells you your space suit is working, that air is being circulated, that you have power.”

While the soundscape broadcast inside his VR headset takes a bit of artistic license, he could still pick out the faint yet familiar whirring of equipment in the background during his virtual spacewalk. “It’s a comforting sound.”

Unfolding the Universe: First light will be on display at ONX Studio in Manhattan, New York, through October 23, 2022. Join the VR space from a browser here.

The post A new VR exhibit takes you inside the James Webb Space Telescope’s images appeared first on Popular Science.
