Twenty years ago, nanotechnology was the artificial intelligence of its time. The particular details of those technologies are, of course, a world apart. But the challenges of ensuring the responsible and beneficial development of any technology are surprisingly similar. Nanotechnology, that is, technologies at the scale of individual atoms and molecules, even harbored its own existential risk in the form of "gray goo".
However, as potentially transformative AI-based technologies continue to emerge and gain traction, it is not clear whether people in the field of artificial intelligence are applying the lessons learned from nanotechnology.
As researchers who study the future of innovation, we explore these parallels in a new commentary in the journal Nature Nanotechnology. The commentary also addresses how a failure to engage a diverse community of experts and interests jeopardizes the long-term success of AI.
Excitement and fear of nanotechnology
In the late 1990s and early 2000s, nanotechnology transitioned from a radical and somewhat fringe idea to mainstream acceptance. The U.S. government and other governments around the globe increased their investments in what was called the "next industrial revolution." Government experts made compelling arguments for how, as a foundational report from the U.S. National Science and Technology Council put it, "shaping the world one atom at a time" would positively transform the economy, the environment and people's lives.
But there was a problem. Following public opposition to genetically modified crops, as well as debates over recombinant DNA and the Human Genome Project, there was growing concern among people in the nanotechnology field that a similar backlash against nanotechnology could arise if it were mismanaged.
These concerns were well founded. In the early days of nanotechnology, nonprofit organizations such as the ETC Group, Friends of the Earth and others vigorously protested claims that this sort of technology was safe, that there were minimal downsides, and that experts and developers knew what they were doing. This period saw public protests against nanotechnology and, worryingly, even a bombing campaign by environmental extremists targeting nanotechnology researchers.
Just as with today's AI, there were concerns about the impact on jobs as a new wave of skills and automation displaced established career paths. Foreshadowing current AI concerns, fears about existential risks also emerged, particularly the possibility that self-replicating "nanobots" would convert all matter on Earth into copies of themselves, leaving the planet covered in "gray goo." This scenario was even highlighted by Bill Joy, co-founder of Sun Microsystems, in a prominent article in Wired magazine.
However, many of the potential risks associated with nanotechnology were less speculative. Just as there is increasing focus today on more immediate risks associated with AI, in the early 2000s the focus was on examining concrete challenges to ensuring the safe and responsible development of nanotechnology. These included potential health and environmental impacts, social and ethical issues, regulation and governance, and a growing need for public and stakeholder engagement.
The result was an exceedingly complex landscape surrounding the development of nanotechnology, one that promised incredible advances but was fraught with uncertainty and the risk of losing public trust if something went wrong.
How nanotechnology got it right
One of us – Andrew Maynard – was at the forefront of addressing the potential risks of nanotechnology research in the early 2000s as a researcher, co-chair of the interagency Nanotechnology Environmental and Health Implications working group, and senior science advisor to the Woodrow Wilson International Center's work on emerging technologies.
At the time, engaging in the responsible development of nanotechnology and addressing the health, environmental, social and governance challenges the technology posed often felt like an uphill struggle. For every solution there seemed to be a new problem.
But by collaborating with a broad range of experts and stakeholders – many of whom were not experts in the field of nanotechnology but who brought critical perspectives and insights – the field produced initiatives that laid the foundation for nanotechnology to thrive. These included multi-stakeholder partnerships, consensus standards, and initiatives driven by global bodies such as the Organisation for Economic Co-operation and Development.
As a result, many of the technologies people depend on today are based on advances in nanoscale science and engineering. Even some of the advances in AI rely on nanotechnology-based hardware.
In the United States, much of this collaboration was driven by the interagency National Nanotechnology Initiative. In the early 2000s, the initiative brought together representatives from across government to better understand the risks and benefits of nanotechnology. It helped convene a broad and diverse range of scientists, researchers, developers, practitioners, educators, activists, policymakers and other stakeholders to help develop strategies for ensuring socially and economically beneficial nanotechnologies.
In 2003, the 21st Century Nanotechnology Research and Development Act was signed into law, further codifying this commitment to engaging a broad range of stakeholders. The following years saw a growing number of federally funded initiatives – including the Center for Nanotechnology in Society at Arizona State University (where one of us served on the Board of Visitors) – that cemented the principle of broad engagement around emerging cutting-edge technologies.
Only experts at the table
These and similar efforts around the globe were pivotal to the emergence of beneficial and responsible nanotechnology. But despite similar aspirations around AI, the same diversity and commitment are missing. By comparison, AI development as practiced today is far more exclusionary. The White House has prioritized consultations with CEOs of AI companies, and Senate hearings have leaned heavily on technical experts.
Based on the lessons of nanotechnology, we believe this approach is a mistake. While members of the public, policymakers and experts outside the AI field may not fully understand the technology's intimate details, they are often quite capable of understanding its implications. More importantly, they bring a diversity of expertise and perspectives that is essential to the successful development of an advanced technology like AI.
For this reason, in our commentary in Nature Nanotechnology, we recommend learning from the lessons of nanotechnology and engaging early and often with experts and stakeholders who may not know the technical details and science behind AI, but who nevertheless possess knowledge and insights essential to ensuring the technology's success.
The clock is ticking
Artificial intelligence may be the most transformative technology in living memory. If developed wisely, it could positively change the lives of billions of people. But it will succeed only if society applies the lessons from previous technology transitions such as nanotechnology.
As in the early days of nanotechnology, there is an urgent need to address the challenges of AI. The early stages of an advanced technology transition set the path for how it will play out in the coming decades. And given the recent advances in AI, that window is closing quickly.
It is not only the future of AI that is at stake. Artificial intelligence is just one of many transformative emerging technologies. Quantum technologies, advanced genetic engineering, neurotechnologies and more are on the rise. If society does not learn from the past to successfully navigate these upcoming transitions, it risks seeing them fail to deliver on their promise, and there is a chance that each will do more harm than good.