The year is 2052. The world has averted the climate crisis by finally adopting nuclear power for the majority of its generation. Conventional wisdom now holds that nuclear power plants are merely a problem of complexity; Three Mile Island is a punchline rather than a disaster. Fears around nuclear waste and plant meltdowns have been alleviated primarily through better software automation. What we didn’t know is that the software for all nuclear power plants, made by a few different vendors around the world, shares the same bias. After two decades of flawless operation, several unrelated plants fail in the same year. The council of nuclear power CEOs realizes that everyone who knows how to operate Class IV nuclear power plants is either dead or retired. We now have to choose between modernity and unacceptable risk.
Artificial Intelligence, or AI, is having a moment. After a multi-decade “AI winter,” machine learning has awakened from its slumber to find a world of technical advances like reinforcement learning and transformers, along with computational resources that can finally make use of them.
AI’s ascendance has not gone unnoticed; in fact, it has spurred much debate. The conversation is often dominated by those who are afraid of AI, ranging from ethical-AI researchers worried about bias to rationalists contemplating extinction events. Their concerns tend to revolve around AI that is hard to understand or too intelligent to control, ultimately subverting the goals of us, its creators. Usually, AI boosters respond with a techno-optimist tack. They argue that these worrywarts are simply wrong, pointing to abstract arguments of their own as well as hard data on the good AI has already done for us to suggest it will continue to do good in the future.
Both of these views miss the point. An ethereal form of strong AI isn’t here yet and probably won’t be for some time. Instead, we face a bigger risk, one that is here today and only getting worse: we are deploying lots of AI before it is fully baked. Our biggest risk is not AI that is too smart but AI that is too dumb. Our greatest risk is the one in the vignette above: AI that is not malevolent but stupid. And we are ignoring it.
Dumb AI is already out there
Dumb AI is a bigger risk than strong AI principally because the former actually exists, while it is not yet known whether the latter is even possible. Perhaps Eliezer Yudkowsky put it best: “the greatest danger of Artificial Intelligence is that people conclude too early that they understand it.”
Real AI is in actual use, from manufacturing floors to translation services. According to McKinsey, fully 70% of companies reported revenue generation from using AI. These are not trivial applications, either: AI is being deployed today in mission-critical functions, functions most people still erroneously think are far away. The examples are many.
The US military is already deploying autonomous weapons (specifically, quadcopter mines) that do not require human kill decisions, even though we do not yet have an autonomous weapons treaty. Amazon actually deployed an AI-powered resume-sorting tool before retracting it for sexism. Facial recognition software used by actual police departments is resulting in wrongful arrests. Epic Systems’ sepsis prediction system is frequently wrong even though it is in use at hospitals across the United States. IBM even canceled a $62 million clinical radiology contract because its recommendations were “unsafe and incorrect.”
The obvious objection to these examples, put forth by researchers like Michael Jordan, is that they are actually examples of machine learning rather than AI, and that the terms should not be used interchangeably. The essence of this critique is that machine learning systems are not truly intelligent, for a host of reasons, such as an inability to adapt to new situations or a lack of robustness against small changes. This is a fine critique, but there is something important about the fact that machine learning systems can still perform well at difficult tasks without explicit instruction. They are not perfect reasoning machines, but neither are we (if we were, we would presumably never lose to imperfect programs like AlphaGo).
Usually, we catch dumb-AI risks through testing. But testing breaks down in part because we validate these technologies in less arduous domains, where the tolerance for error is higher, and then deploy the same technology in higher-risk fields. In other words, the AI models behind both Tesla’s Autopilot and Facebook’s content moderation are based on the same core technology of neural networks, but it certainly appears that Facebook’s models are overzealous while Tesla’s are too lax.
Where does dumb AI risk come from?
First and foremost, there is dramatic risk from AI built on fundamentally sound technology but completely misapplied. Some fields are simply overrun with bad practices. For example, in microbiome research, one meta-analysis found that 88% of papers in its sample were so flawed as to be plainly untrustworthy. This is a particular worry as AI gets more widely deployed; there are far more use cases than there are people who know how to carefully develop AI systems, or how to deploy and monitor them.
Another important problem is latent bias. Here, “bias” does not just mean discrimination against minorities; it means bias in the more technical sense of a model whose unexpected behavior consistently leans in a particular direction. Bias can come from many places, whether a poor training set, a subtle implication of the math, or just an unanticipated incentive in the fitness function. It should give us pause, for example, that every social media filtering algorithm creates a bias toward outrageous behavior, regardless of which company, country or university produced the model. There may be many other model biases we haven’t yet discovered; the big risk is that these biases may have a long feedback cycle and be detectable only at scale, which means we will become aware of them only in production, after the damage is done.
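To make the "unanticipated incentive" failure mode concrete, here is a toy sketch (hypothetical numbers and a hypothetical feed, not any real platform's algorithm): a ranker whose fitness function mentions only predicted clicks ends up systematically surfacing high-outrage posts, simply because outrage happens to correlate with clicks.

```python
import random

random.seed(0)

# Hypothetical feed: each post has an "outrage" level in [0, 1].
posts = [{"id": i, "outrage": random.random()} for i in range(1000)]

# Assume click-through rises with outrage -- a correlation no one designed for.
for p in posts:
    p["predicted_clicks"] = 0.1 + 0.8 * p["outrage"] + random.gauss(0, 0.05)

# The fitness function only mentions clicks; "outrage" appears nowhere in it.
feed = sorted(posts, key=lambda p: p["predicted_clicks"], reverse=True)[:10]

avg_outrage_top = sum(p["outrage"] for p in feed) / len(feed)
avg_outrage_all = sum(p["outrage"] for p in posts) / len(posts)
print(f"avg outrage, top of feed: {avg_outrage_top:.2f}")
print(f"avg outrage, all posts:   {avg_outrage_all:.2f}")
```

The bias toward outrage is real but latent: nothing in the objective names it, so it would only show up by measuring the ranked output at scale.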
There is also a risk that models with such latent flaws will be too widely distributed. Percy Liang at Stanford has noted that so-called “foundation models” are now deployed quite widely, so a problem in a foundation model can create unexpected issues downstream. The nuclear vignette at the start of this essay is an illustration of precisely that kind of risk.
As we continue to deploy dumb AI, our ability to fix it worsens over time. When the Colonial Pipeline was hacked, the CEO noted that they could not switch to manual mode because the people who historically operated the manual pipelines were retired or dead, a phenomenon called “deskilling.” In some contexts, you might want to teach a manual alternative, like teaching military sailors celestial navigation in case of GPS failure, but this is highly infeasible as society becomes ever more automated — the cost eventually becomes so high that the purpose of automation goes away. Increasingly, we forget how to do what we once did for ourselves, creating the risk of what Samo Burja calls “industrial exhaustion.”
The solution: not less AI, smarter AI
So what does this mean for AI development, and how should we proceed?
AI is not going away. In fact, it will only become more widely deployed. Any attempt to deal with the problem of dumb AI has to address the short-to-medium-term issues mentioned above as well as the long-term concerns, and do so without depending on that deus ex machina, strong AI.
Thankfully, many of these problems are potential startups in themselves. Estimates of the AI market vary, but it can easily exceed $60 billion in size and 40% in CAGR. In such a big market, each problem can spawn a billion-dollar company.
The first important issue is faulty AI stemming from development or deployment that flies against best practices. We need better training, both white-labeled for universities and as career education; think of it as a General Assembly for AI. Many basic issues, from proper implementation of k-fold validation to production deployment, can be fixed by SaaS companies that do the heavy lifting. These are big problems, each of which deserves its own company.
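As one concrete example of the "proper implementation of k-fold validation" point: a common mistake is fitting preprocessing (such as feature scaling) on the full dataset before splitting, which leaks held-out information into training. A minimal sketch, assuming scikit-learn is available and using synthetic data, that keeps preprocessing inside each fold via a Pipeline:

```python
# Sketch of k-fold cross-validation done properly: the StandardScaler is
# refit on each training fold inside the Pipeline, so no statistics from
# the held-out fold leak into preprocessing.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import KFold, cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

X, y = make_classification(n_samples=500, n_features=20, random_state=0)

model = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
cv = KFold(n_splits=5, shuffle=True, random_state=0)
scores = cross_val_score(model, X, y, cv=cv)

print(f"fold accuracies: {[round(s, 3) for s in scores]}")
print(f"mean accuracy:   {scores.mean():.3f}")
```

The anti-pattern would be calling `StandardScaler().fit(X)` on all of `X` first; the Pipeline makes the leak structurally impossible.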
The next big issue is data. Whether your system is supervised or unsupervised (or even symbolic!), a large amount of data is needed to train and then test your models. Getting the data can be very hard, but so can labeling, developing good metrics for bias, making sure that it is comprehensive, and so on. Scale.ai has already proven that there is a large market for these companies; clearly, there is much more to do, including collecting ex-post performance data for tuning and auditing model performance.
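As one illustration of what "good metrics for bias" can look like in practice, here is a minimal sketch (with hypothetical predictions and group labels) of demographic parity difference, the gap in positive-prediction rates between two groups:

```python
# Demographic parity difference: how much more often does the model give a
# positive outcome to group A than to group B? Zero means parity.
def positive_rate(preds):
    return sum(preds) / len(preds)

def demographic_parity_diff(preds, groups, a="A", b="B"):
    rate_a = positive_rate([p for p, g in zip(preds, groups) if g == a])
    rate_b = positive_rate([p for p, g in zip(preds, groups) if g == b])
    return rate_a - rate_b

# Hypothetical resume-screener output: 1 = advance, 0 = reject.
preds  = [1, 1, 0, 1, 1, 0, 0, 1, 0, 0, 1, 0]
groups = ["A", "A", "A", "A", "A", "A", "B", "B", "B", "B", "B", "B"]

gap = demographic_parity_diff(preds, groups)
print(f"positive-rate gap between groups: {gap:.2f}")
```

Parity gaps are only one lens on fairness, but even this much measurement would have flagged a resume sorter like Amazon's before deployment.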
Lastly, we need to make actual AI better. We should not fear research and startups that make AI better; we should fear their absence. The primary problems come not from AI that is too good, but from AI that is too bad. That means investments in techniques to decrease the amount of data needed to make good models, new foundation models, and more. Much of this work should also focus on making models more auditable, focusing on things like explainability and scrutability. While these will be companies too, many of these advances will require R&D spending within existing companies and research grants to universities.
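One common auditability technique in the "explainability" family is permutation importance: shuffle one feature at a time and measure how much the model's accuracy drops, revealing which inputs it actually relies on. A minimal sketch, with a hypothetical stand-in model and synthetic data:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data: only feature 0 actually drives the label.
X = rng.normal(size=(400, 3))
y = (X[:, 0] > 0).astype(int)

def model(X):
    # Stand-in "trained model": thresholds feature 0 and ignores the rest.
    return (X[:, 0] > 0).astype(int)

def permutation_importance(model, X, y, feature, rng):
    # Accuracy drop after shuffling one feature column.
    base = np.mean(model(X) == y)
    Xp = X.copy()
    Xp[:, feature] = rng.permutation(Xp[:, feature])
    return base - np.mean(model(Xp) == y)

drops = [permutation_importance(model, X, y, j, rng) for j in range(3)]
print("accuracy drop per permuted feature:", [round(d, 2) for d in drops])
```

An audit like this can expose a model leaning on a feature it shouldn't (say, a proxy for a protected attribute) before it reaches production.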
That said, we must be careful. Our solutions may end up making problems worse. Transfer learning, for example, could prevent error by allowing different learning agents to share their progress, but it also has the potential to propagate bias or measurement error. We also need to balance the risks against the benefits. Many AI systems are extremely beneficial. They help the disabled navigate streets, allow for superior and free translation, and have made phone photography better than ever. We don’t want to throw out the baby with the bathwater.
We also need to avoid alarmism. We often penalize AI for errors unfairly because it is a new technology. The ACLU found that facial recognition software falsely matched Congressman John Lewis to a mugshot; Lewis’s status as an American hero is usually used as a “gotcha” for tools like Rekognition, but the human error rate for police lineups can be as high as 39%! It is like when Tesla batteries catch fire: obviously, every fire is a failure, but electric cars catch fire much less often than cars with combustion engines. New can be scary, but Luddites shouldn’t get a veto over the future.
AI is very promising; we just need to make it easy, at every step of the way, to make it truly smart, so that we avoid real harm and, potentially, catastrophe. We have come so far. From here, I am confident we will only go farther.
Evan J. Zimmerman is the founder and CEO of Drift Biotechnologies, a genomic software company, and the founder and chairman of Jovono, a venture capital firm.
NASA Says Hurricane Didn’t Hurt Artemis I Hardware, Sets New Launch Window
NASA’s Artemis I moon mission launch, stalled by Hurricane Ian, has a new target for takeoff. The launch window for step one of NASA’s bold plan to return humans to the lunar surface now opens Nov. 12 and closes Nov. 27, the space agency said Friday.
The news comes after the pending storm caused NASA to scrub the latest Artemis I launch, which had been scheduled for Sunday, Oct. 2. As Hurricane Ian threatened to travel north across Cuba and into Florida, bringing rain and extreme winds to the launch pad’s vicinity, NASA on Monday rolled its monster Space Launch System rocket, and the Orion spacecraft it’ll propel, back indoors to the Vehicle Assembly Building at Florida’s Kennedy Space Center.
The hurricane made landfall in Florida on Wednesday, bringing with it a catastrophic storm surge, winds and flooding that left dozens of people dead, caused widespread power outages and ripped buildings from their foundations. Hurricane Ian is “likely to rank among the worst in the nation’s history,” US President Joe Biden said on Friday, adding that it will take “months, years, to rebuild.”
Initial inspections Friday to assess potential impacts of the devastating storm to Artemis I flight hardware showed no damage, NASA said. “Facilities are in good shape with only minor water intrusion identified in a few locations,” the agency said in a statement.
Next up, teams will complete post-storm recovery operations, which will include further inspections and retests of the flight termination system before a more specific launch date can be set. The new November launch window, NASA said, will also give Kennedy employees time to address what their families and homes need post-storm.
Artemis I is set to send instruments to lunar orbit to gather vital information for Artemis II, a crewed mission targeted for 2024 that will carry astronauts around the moon and hopefully pave the way for Artemis III in 2025. Astronauts on that high-stakes mission will, if all goes according to plan, put boots on the lunar ground, collect samples and study the water ice that’s been confirmed at the moon’s South Pole.
The hurricane-related Artemis I rollback follows two other launch delays, the first due to an engine problem and the second because of a hydrogen leak.
Hurricane Ian has been downgraded to a post-tropical cyclone but is still bringing heavy rains and gusty winds to the Mid-Atlantic region and the New England coast.
What You Get in McDonald’s New Happy-Meal-Inspired Box for Adults
You’ve pulled up to McDonald’s as a full-on adult. You absolutely do not need a toy with your meal, right? Joking. Of course you do.
The fast-food chain will soon sell boxed meals geared toward adults, and each one has a cool, odd-looking figurine inside.
The meal has an odd name — the Cactus Plant Flea Market Box — that’s based on the fashion brand collaborating with McDonald’s on this promotion.
According to McDonald’s, the box is inspired by the memory of enjoying a Happy Meal as a kid. The outside of the box is multicolored and features the chain’s familiar golden arches.
The first day you can get a Cactus Plant Flea Market Box will be Monday, Oct. 3. Pricing is set by individual restaurants and may vary, according to McDonald’s. It’ll be available in the drive-thru, in-restaurant, by delivery or on the McDonald’s app, while supplies last.
You can choose between a Big Mac or 10-piece Chicken McNuggets. It will also come with fries and a drink.
Now about those toys. The boxes will pack in one of four figurines. Three of the four appear to be artsy takes on the classic McDonald’s characters Grimace, Hamburglar and Birdie the Early Bird, while the fourth is a little yellow guy sporting a McDonald’s shirt called Cactus Buddy.
In other McD news, Halloween buckets could be returning to the chain this fall. So leave some room in your stomach for a return trip.
Why companies like iHeartMedia, NBCU rely on homegrown IP to build metaverse engagements
To avoid potential blowback from a skeptical audience, retailers as well as media and entertainment companies are learning to invest in their homegrown intellectual properties while building virtual brand activations inside Roblox or Fortnite.
Consider, for instance, what happens when a brand gets it wrong.
Earlier this week, Walmart launched its own Roblox world — called Walmart Land — and was roundly mocked for it across social media given the announcement’s disjointed brand message and apparent lack of life. In one viral tweet, a Twitter user described a clip of Walmart CMO William White introducing the Roblox space as “one of the saddest videos ever created.”
To some extent, this sort of criticism is to be expected during the early days of the metaverse.
“Walmart is an iconic brand; when you see them coming into a platform like Roblox, people are going to be 10 times more critical of what is being launched,” said Yonatan Raz-Fridman, CEO of the Roblox developer studio Supersocial.
But Walmart’s size is not its only disadvantage as it dips its toes into Roblox. Although Walmart has a widely recognizable brand, it owns few intellectual properties that users are actually interested in experiencing virtually — a shortcoming reflected by the somewhat cavernous emptiness of Roblox’s Walmart Land.
The success of other recent brand activations is evidence that media and entertainment brands are better equipped to build metaverse spaces that can dodge online skepticism, thanks to their wealth of owned IP.
“They are having to reinvent themselves, to a certain degree, but that is in their DNA,” said Jesse Streb, global svp of technology and engineering at the agency DEPT. “So they have a unique advantage over, say, some kludgy company that sells lumber, or a construction company.”
For example, iHeartMedia’s Roblox and Fortnite spaces were inspired by the mass media corporation’s wealth of popular real-life events, such as the Jingle Ball Tour and iHeartRadio Music Festival, with virtual versions of musicians like Charlie Puth performing pre-recorded concerts that allow real-time audience interaction.
“There’s a strong brand association with the IP, down to a station level — you’re in the New York area, you probably know Z100,” said iHeartMedia evp of business development and partnerships Jess Jerrick. “The same is true for the event IP, or the IP that we now have in the podcasting space, and of course our radio broadcast talent. So there’s no shortage of really strong IP we can bring into these spaces.”
Translating real-life properties into the metaverse is also an enticing prospect for brands that view metaverse platforms as an experimental marketing channel, allowing them to bring tried-and-true IP into their virtual activations instead of designing them from the ground level. This was part of the strategy behind the recent Tonight Show activation in Fortnite Creative, which was designed in collaboration between NBCUniversal and Samsung. “We’re looking at it holistically — how do we find fans in new ways, and use IP that fans love in new ways?” said NBCU president of advertising and client partnerships Mark Marshall.
Since opening on Sept. 14, iHeartLand has already enticed over 1.5 million Roblox users to visit. The company aims to retain that attention with a schedule of virtual programming featuring popular musicians and personalities.
“At our core, we are essentially an influencer network; our broadcast talent are some of the most connected, most engaging influencers at work in media today,” said Conal Byrne, CEO of iHeart Digital Audio Group. “That gives us this sort of superpower, to be able to go into new-ish platforms, like Roblox or Fortnite, because we talk to our listeners through those influencers.”