President Joe Biden often seeks guidance on artificial intelligence from his science adviser, Arati Prabhakar, who directs the White House Office of Science and Technology Policy. With the support of major American tech companies like Amazon, Google, Microsoft, and Meta, Prabhakar is playing a vital role in shaping the U.S. approach to safeguarding AI technology.
As someone who has worked in both government and the private sector, she brings a unique perspective to the table. Ahead of a recent mass test of AI systems at the DefCon hacker convention in Las Vegas, Prabhakar shared her insights with The Associated Press.
During their conversations about AI, President Biden demonstrates an admirable focus on understanding the technology and its applications. He delves into the potential consequences and implications, which sparks productive and exploratory discussions.
These conversations are centered on taking action. It is refreshing to see such engagement and a commitment to shaping the future of AI in a responsible manner.
Senate Majority Leader Chuck Schumer has made the explainability of AI models a priority. However, deep-learning and machine-learning systems are by their technical nature opaque, functioning effectively as black boxes.
Although this might raise concerns, it is worth noting that many risks we encounter in life come from things we cannot fully explain. We have found ways to ensure the safety of pharmaceuticals even though we cannot predict every cell interaction in the human body. Similarly, with artificial intelligence, we can work toward understanding enough about these systems' safety and effectiveness to capture their value.
While certain AI applications are evident causes for concern, others are more nuanced. Exploiting chatbots to obtain instructions for building weapons is an obvious and troubling example. Additionally, when these systems are trained on biased human data, they can perpetuate existing biases; we have seen disturbing cases where facial recognition systems led to wrongful arrests of Black individuals. Furthermore, privacy concerns arise when individual pieces of personal data are combined to reveal comprehensive profiles of individuals.
In July, seven companies voluntarily committed to meeting AI safety standards established by the White House. It is fortunate that many leading AI technology companies are based in the United States, a testament to the country’s long-standing commitment to innovation. However, it is important to recognize that even with the best intentions, market realities can restrict the extent to which individual companies can act.
The aim is to encourage more companies to step up and make voluntary commitments, while acknowledging that these commitments alone are not sufficient. Government involvement is necessary from both the executive and legislative branches.
Regarding future actions, various measures are currently being considered. While a specific timeline is not available, swift action is a priority. President Biden has unequivocally emphasized the urgent nature of this issue. As discussions progress, accountability measures for AI developers may be introduced to ensure responsible development and deployment of AI technologies.