AI: An Opportunity to Learn from the Past
The global narrative surrounding artificial intelligence (AI) is overwhelmingly optimistic. As the world stands on the precipice of another technological revolution, the impact of AI is predicted to rival that of the industrial revolution. Between 1760 and 1840, the industrial revolution ushered in an era of unprecedented growth and change. Despite its monumental contributions to societal advancement, it also widened economic disparities.
The recent announcement from President Biden echoes a sentiment of cautious optimism. By establishing rigorous guidelines and directives for AI development and usage, the Biden administration is signaling a commitment to avoiding the pitfalls of previous revolutions. But can we truly learn from our past?
Human Nature: A Double-Edged Sword
The primal instincts that have driven humanity since our days as hunter-gatherers can be both our greatest strength and our most glaring weakness. Throughout history, we've seldom seen a time of global peace. This inherent propensity for conflict underscores the need for stringent guidelines, especially when introducing transformative technologies.
The nuclear arms race of the 20th century provides a compelling case study. Just forty-two years separated the Wright brothers' first flight in 1903 from the atomic bombings of 1945: barely a generation to move from powered flight to weapons capable of ending civilization. The world then recognized the potential for mutual destruction and responded with a series of treaties and agreements to prevent the worst-case scenario. The lesson here? When confronted with powerful new technologies, cooperative governance can avert catastrophe.
The Dual Nature of AI
While I applaud the Biden administration's proactive stance on AI, I believe that a secure AI future cannot be separated from continued innovation. Deep learning (layered systems that combine linear transformations with non-linear functions and learn through repeated feedback) is still in its infancy, yet it has been pivotal in the recent AI surge. The U.S. must champion this technological advancement, especially in light of rising global challenges.
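To make that parenthetical concrete, here is a minimal sketch, assuming Python with NumPy and a toy XOR problem chosen purely for illustration (the layer sizes and learning rate are arbitrary). Each layer applies a linear transformation followed by a non-linear activation, and gradient descent adjusts the weights so the system "learns" from its own errors.

```python
import numpy as np

# Toy inputs and targets: XOR, which no purely linear model can fit.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

rng = np.random.default_rng(0)

# Weights for a 2 -> 4 -> 1 network (sizes are illustrative only).
W1 = rng.normal(size=(2, 4))
b1 = np.zeros(4)
W2 = rng.normal(size=(4, 1))
b2 = np.zeros(1)

def sigmoid(z):
    # The non-linear activation that lets stacked layers model non-linear patterns.
    return 1.0 / (1.0 + np.exp(-z))

learning_rate = 1.0
for step in range(5000):
    # Forward pass: linear transform, then non-linearity, repeated per layer.
    h = sigmoid(X @ W1 + b1)        # hidden layer
    p = sigmoid(h @ W2 + b2)        # output layer (predicted probability)

    # Backward pass: gradients of mean squared error w.r.t. each weight.
    grad_p = (p - y) * p * (1 - p)
    grad_W2 = h.T @ grad_p
    grad_b2 = grad_p.sum(axis=0)
    grad_h = grad_p @ W2.T * h * (1 - h)
    grad_W1 = X.T @ grad_h
    grad_b1 = grad_h.sum(axis=0)

    # Gradient descent update: the feedback loop that constitutes "learning".
    W2 -= learning_rate * grad_W2
    b2 -= learning_rate * grad_b2
    W1 -= learning_rate * grad_W1
    b1 -= learning_rate * grad_b1

# After training, predictions should approach [[0], [1], [1], [0]].
final = sigmoid(sigmoid(X @ W1 + b1) @ W2 + b2)
print(np.round(final, 2))
```

The point of the sketch is only that nothing exotic is happening at the core of these systems: linear algebra, a non-linearity, and repeated correction. The scale and data, not the math, are what make modern deep learning powerful and hard to govern.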
However, the risks associated with AI, particularly in these deep neural networks, are not to be underestimated. A friend of mine (who will remain anonymous) works for one of the largest technology companies on the planet. During a recent happy hour, she recounted her part in an organization-wide effort to deploy an LLM at enterprise scale, describing a year's worth of cross-functional teamwork and testing. Despite those efforts, fifteen minutes after going live, someone was using the model to create lewd text and images. The current flexibility of AI systems demands comprehensive safeguards, and we can only come to understand those safeguards through continuous building.
Towards an Equitable AI Future
As we inch closer to the realization of Artificial General Intelligence (AGI), questions about AI's foundational principles come to the forefront. Should companies like OpenAI, Meta, and Google be credited for basing their models on human history? That history encompasses the best and worst of humanity, from groundbreaking innovations to divisive echo chambers.
The emphasis on supporting the workforce is commendable but long overdue. The digital divide, which began widening in the 1980s, has yet to be bridged. It's noteworthy that the impending "AI gap" seems to be garnering more attention, possibly due to its potential impact on higher-tier jobs. The proactive approach to this new divide raises the question: Why was the digital divide not addressed with the same urgency?
In Conclusion
The Biden administration's focus on AI is timely and essential. As AI continues to evolve, it's crucial to strike a balance between innovation and security. By learning from past technological revolutions and human tendencies, we stand a chance at ensuring that AI benefits all of humanity, not just a select few.