The reach of generative artificial intelligence (AI) skyrocketed this year as tech companies raced to get ahead of each other while regulators and lawmakers looked to add guardrails.
As AI became more common and accessible, especially through OpenAI’s popular ChatGPT chatbot, it reshaped society by posing new opportunities and risks for sectors ranging from education to Hollywood.
Here are five ways AI is rapidly changing our world.
ChatGPT kicks off tech race among companies
When OpenAI launched ChatGPT publicly in November 2022, it burst onto the scene. The chatbot reached 15.5 million visits in its first week and hit a peak of 1.8 billion visits in May, according to Similarweb data.
ChatGPT’s success also kicked off a race among tech companies.
Just one day before Microsoft said it would incorporate ChatGPT into Bing search in February, Google announced the launch of Bard, its own AI chatbot. As the companies continue to race against each other to stay on the cutting edge, Google announced this month the launch of Gemini, an AI model intended to make Bard more humanlike in its communication and eventually bolster the company’s search engine.
Google said Gemini will be incorporated into Bard, as well as into its Pixel 8 Pro smartphone.
Other tech companies also jumped into the AI race. Meta, the parent company of Facebook and Instagram, announced in July that its open-sourced large language model, Llama 2, would be released for commercial use and research.
Tesla, SpaceX and X owner Elon Musk also launched a ChatGPT rival under his xAI company in November. The company said the AI, called Grok, had a “rebellious streak.” The company also boasted Grok’s “real-time access” to info from X, the platform formerly known as Twitter, as a “massive advantage” over other platforms.
Analysts at Wedbush said in a December memo they expect spending on AI to accelerate across the tech sector into 2024. The analysts said they view AI as the “most transformative technology trend since the start of the Internet in 1995.”
AI sends Congress scrambling
As companies sped ahead with AI advancements, Congress spent much of the second half of the year holding hearings and meeting with experts to understand the risks and benefits of the technology.
A Senate Judiciary subcommittee held a May hearing featuring testimony from OpenAI CEO Sam Altman. He pledged to work with government and the rest of the industry to minimize risks as the technology advances — warning that AI could “cause significant harm to the world.”
Altman returned to the Capitol in September, along with other CEOs of leading AI companies, civil society leaders and other AI experts, to meet with senators in the first of a series of closed-door discussions as part of Senate Majority Leader Chuck Schumer’s (D-N.Y.) AI Insight Forums.
The first meeting included attendees such as Musk, Google CEO Sundar Pichai, Microsoft CEO Satya Nadella and Meta CEO Mark Zuckerberg.
After the inaugural meeting, the Senate held a series of other forums focused on different areas of AI’s impact, including election concerns, intellectual property issues, and national security risks.
The bipartisan group of senators leading the effort with Schumer includes Sens. Martin Heinrich (D-N.M.), Todd Young (R-Ind.) and Mike Rounds (R-S.D.). While they have touted bipartisan agreement on many of the issues raised, lawmakers have not yet advanced a regulatory framework to address AI.
Hollywood sets new standard for AI labor protections
The advanced capabilities of AI also raised concerns about how the technology will impact jobs.
Roughly 80 percent of the U.S. workforce could have at least 10 percent of their work tasks affected by the introduction of large language models in their industry, according to a March paper from researchers at OpenAI, OpenResearch and the University of Pennsylvania.
Hollywood writers and actors led the way this year in securing protections from AI through union contracts.
The agreement reached between the Writers Guild of America and the Alliance of Motion Picture and Television Producers in October ended a 148-day strike and set in place AI protections that could set a precedent for other industries.
Part of the agreement gave union members the right to choose whether to use AI, while prohibiting studios from forcing writers to use it.
It also ensured that companies must disclose whether any materials given to a writer were generated by AI or incorporate AI-generated material.
SAG-AFTRA reached its own agreement with the studios in November that also included AI protections. The agreement requires a performer’s consent to create a digital replica of them. It also lays out how a performer will be compensated for the use of a replica, including for the time spent creating it and for how it is used.
AI raises election misinformation concerns
The rise of generative AI tools, especially audio and video technology that can appear to show people saying or doing things that did not occur, could fuel the spread of political misinformation heading into the 2024 elections.
Several campaigns, especially in the Republican presidential primary, have used AI in election content.
Those concerns have pushed Congress, the Federal Election Commission (FEC) and tech companies to consider ways to mitigate the spread of false information.
Google and Meta have put in place policies requiring election advertisers to disclose the use of AI in political ads.
The FEC is considering a rule clarification that would also address the use of AI in campaigns, after a push by the consumer advocacy group Public Citizen.
AI shakes up education
The launch of ChatGPT and its immediate popularity raised concerns among schools that students might use the tool to cheat on schoolwork.
The concerns led to an initial wave of bans on the technology. However, many have since reversed those bans in search of ways to incorporate the technology into education.
For example, the New York City school system revoked its ban on ChatGPT in May. Chancellor David Banks wrote an op-ed published by Chalkbeat explaining the decision and how the schools would work to integrate AI into the curriculum.
Along with that change, though, came another concern associated with AI in education — whether it will widen the tech divide for rural and poor students.
AI may also change education by helping to alleviate teacher shortages, which have been a growing concern since the COVID-19 pandemic.