It seems like just yesterday (though it’s been almost six months) that OpenAI launched ChatGPT and began making headlines.
ChatGPT reached 100 million users within three months, making it the fastest-growing application in decades. For comparison, it took TikTok nine months – and Instagram two and a half years – to reach the same milestone.
Now, ChatGPT can utilize GPT-4 along with internet browsing and plugins from brands like Expedia, Zapier, Zillow, and more to answer user prompts.
Big Tech companies like Microsoft have partnered with OpenAI to create AI-powered customer solutions. Google, Meta, and others are building their own language models and AI products.
Over 27,000 people – including tech CEOs, professors, research scientists, and politicians – have signed a petition calling for a pause on the development of AI systems more powerful than GPT-4.
Now, the question may no longer be whether the United States government should regulate AI – but whether it’s already too late.
The following are recent developments in AI regulation and how they may affect the future of AI advancement.
Federal Agencies Commit To Fighting Bias
Four key U.S. federal agencies – the Consumer Financial Protection Bureau (CFPB), the Department of Justice’s Civil Rights Division (DOJ-CRD), the Equal Employment Opportunity Commission (EEOC), and the Federal Trade Commission (FTC) – issued a joint statement affirming their commitment to curbing bias and discrimination in automated systems and AI.
These agencies have underscored their intent to apply existing regulations to these emergent technologies to ensure they uphold the principles of fairness, equality, and justice.
- The CFPB, responsible for consumer protection in the financial marketplace, reaffirmed that existing consumer financial laws apply to all technologies, irrespective of their complexity or novelty. The agency has been transparent in its stance that the innovative nature of AI technology cannot be used as a defense for violating these laws.
- The DOJ-CRD, tasked with safeguarding against discrimination in various facets of life, applies the Fair Housing Act to algorithm-based tenant screening services. This exemplifies how existing civil rights laws can be applied to automated systems and AI.
- The EEOC, responsible for enforcing anti-discrimination laws in employment, issued guidance on how the Americans with Disabilities Act applies to AI and software used in making employment decisions.
- The FTC, which protects consumers from unfair business practices, expressed concern over the potential of AI tools to be inherently biased, inaccurate, or discriminatory. It has cautioned that deploying AI without adequate risk assessment or making unsubstantiated claims about AI could be seen as a violation of the FTC Act.
For example, the Center for Artificial Intelligence and Digital Policy has filed a complaint with the FTC over OpenAI’s release of GPT-4, a product that “is biased, deceptive, and a risk to privacy and public safety.”
Senator Questions AI Companies About Security And Misuse
U.S. Sen. Mark R. Warner sent letters to leading AI companies, including Anthropic, Apple, Google, Meta, Microsoft, Midjourney, and OpenAI.
In the letters, Warner expressed concerns about security considerations in the development and use of artificial intelligence (AI) systems. He asked the recipients to prioritize these security measures in their work.
Warner highlighted a number of AI-specific security risks, such as data supply chain issues, data poisoning attacks, adversarial examples, and the potential misuse or malicious use of AI systems. These concerns were set against the backdrop of AI’s increasing integration into various sectors of the economy, such as healthcare and finance, which underscores the need for security precautions.
The letter asked 16 questions about the measures taken to ensure AI security. It also implied the need for some level of regulation in the field to prevent harmful effects and ensure that AI does not advance without appropriate safeguards.
AI companies were asked to respond by May 26, 2023.
The White House Meets With AI Leaders
The Biden-Harris Administration announced initiatives to foster responsible innovation in artificial intelligence (AI), protect citizens’ rights, and ensure safety.
These measures align with the federal government’s drive to manage the risks and opportunities associated with AI.
The White House aims to put people and communities first, promoting AI innovation for the public good and protecting society, security, and the economy.
Top administration officials, including Vice President Kamala Harris, met with Alphabet, Anthropic, Microsoft, and OpenAI leaders to discuss this obligation and the need for responsible and ethical innovation.
Specifically, they discussed corporations’ obligation to ensure the safety of large language models (LLMs) and AI products before public deployment.
New steps would ideally supplement extensive measures already taken by the administration to promote responsible innovation, such as the AI Bill of Rights, the AI Risk Management Framework, and plans for a National AI Research Resource.
Additional actions have been taken to protect users in the AI era, such as an executive order to eliminate bias in the design and use of new technologies, including AI.
The White House noted that the FTC, CFPB, EEOC, and DOJ-CRD have collectively committed to leveraging their legal authority to protect Americans from AI-related harm.
The administration also addressed national security concerns related to AI cybersecurity and biosecurity.
New initiatives include $140 million in National Science Foundation funding for seven National AI Research Institutes, public evaluations of existing generative AI systems, and new policy guidance from the Office of Management and Budget on the federal government’s use of AI.
The Oversight of AI Hearing Explores AI Regulation
Members of the Senate Judiciary Subcommittee on Privacy, Technology, and the Law held an Oversight of AI hearing with prominent members of the AI community to discuss AI regulation.
Approaching Regulation With Precision
Christina Montgomery, Chief Privacy and Trust Officer of IBM, emphasized that while AI has significantly advanced and is now integral to both consumer and business spheres, the increased public attention it’s receiving requires careful assessment of potential societal impact, including bias and misuse.
She supported the government’s role in developing a robust regulatory framework, proposed IBM’s ‘precision regulation’ approach – which sets rules for specific use cases rather than for the technology itself – and outlined its main components.
Montgomery also acknowledged the challenges of generative AI systems, advocating for a risk-based regulatory approach that doesn’t hinder innovation. She underscored businesses’ crucial role in deploying AI responsibly, detailing IBM’s governance practices and the necessity of an AI Ethics Board in all companies involved with AI.
Addressing Potential Economic Effects Of GPT-4 And Beyond
Sam Altman, CEO of OpenAI, outlined the company’s deep commitment to safety, cybersecurity, and the ethical implications of its AI technologies.
According to Altman, the firm conducts relentless internal and third-party penetration testing and regular audits of its security controls. OpenAI, he added, is also pioneering new strategies for strengthening its AI systems against emerging cyber threats.
Altman appeared to be particularly concerned about the economic effects of AI on the labor market, as ChatGPT could automate some jobs away. Under Altman’s leadership, OpenAI is working with economists and the U.S. government to assess these impacts and devise policies to mitigate potential harm.
Altman mentioned OpenAI’s proactive efforts to research policy tools that could soften the blow of technological disruption – such as modernizing unemployment benefits and creating worker assistance programs – and its support for programs like Worldcoin. (A fund in Italy, meanwhile, recently reserved 30 million euros to invest in services for workers most at risk of displacement from AI.)
Altman emphasized the need for effective AI regulation and pledged OpenAI’s continued support in aiding policymakers. The company’s goal, Altman affirmed, is to assist in formulating regulations that both encourage safety and allow broad access to the benefits of AI.
He stressed the importance of collective participation from various stakeholders, global regulatory strategies, and international collaboration for ensuring AI technology’s safe and beneficial evolution.
Exploring The Potential For AI Harm
Gary Marcus, Professor of Psychology and Neural Science at NYU, voiced his mounting concerns over the potential misuse of AI, particularly powerful and influential language models like GPT-4.
He illustrated his concern by showcasing how he and a software engineer manipulated the system to concoct an entirely fictitious narrative about aliens controlling the U.S. Senate.
This illustrative scenario exposed the danger of AI systems convincingly fabricating stories, raising alarm about the potential for such technology to be used in malicious activities – such as election interference or market manipulation.
Marcus highlighted the inherent unreliability of current AI systems, which can lead to serious societal consequences, from promoting baseless accusations to providing potentially harmful advice.
One example involved an open-source chatbot that appeared to influence a person’s decision to take their own life.
Marcus also pointed out the advent of ‘datocracy,’ where AI can subtly shape opinions, possibly surpassing the influence of social media. Another alarming development he brought to attention was the rapid release of AI extensions – like OpenAI’s ChatGPT plugins and the AutoGPT projects that followed – which have direct internet access, code-writing capability, and enhanced automation powers, potentially escalating security concerns.
Marcus closed his testimony with a call for tighter collaboration between independent scientists, tech companies, and governments to ensure AI technology’s safety and responsible use. He warned that while AI presents unprecedented opportunities, the lack of adequate regulation, corporate irresponsibility, and inherent unreliability might lead us into a “perfect storm.”
Can We Regulate AI?
As AI technologies push boundaries, calls for regulation will continue to mount.
In a climate where Big Tech partnerships are on the rise and AI applications are expanding, the alarm bells are ringing: Is it too late to regulate AI?
Federal agencies, the White House, and members of Congress will have to continue investigating the urgent, complex, and potentially risky landscape of AI – while ensuring that promising advancements continue and that Big Tech competition isn’t regulated entirely out of the market.
Featured image: Katherine Welles/Shutterstock