What’s the context?
ChatGPT, an artificial intelligence text generator, is being hailed as the future of work, but not everyone is convinced
- ChatGPT is an AI tool that can generate human-like text
- Technologists see this as a disruptor of many jobs
- Engineers and artists are pushing back against generative AI
LONDON – A new artificial intelligence model called ChatGPT is making waves as technologists promote it as a new tool that will revolutionise how people work.
The tool comes from OpenAI, a for-profit research organisation co-founded by Elon Musk and investor Sam Altman, and backed by billions in funding from Microsoft.
Some advocates predict ChatGPT will result in automating jobs away from human beings, although critics have argued that AI and robotics do not yet have the capability for that.
But already, some schools and universities have responded by limiting or banning its use by students.
Google owner Alphabet Inc announced on Monday it would launch its own chatbot service called Bard and bring more artificial intelligence features to its search engine, as well as tools for developers.
Chinese tech giant Baidu Inc said on Tuesday it would complete internal testing of its own ChatGPT-style project called “Ernie” in March.
So what is ChatGPT, and is it coming for your job?
How does ChatGPT work?
ChatGPT is a large language model that can generate human-like text, trained on a massive dataset of written works from the internet.
The program uses a deep learning technique called “transformer architecture” to sift through several terabytes of data that contain billions of words to create answers to prompts or questions.
ChatGPT’s predecessors include GPT-3, an earlier OpenAI model that also generates text and was trained on a larger set of data. While that makes GPT-3 more powerful in some respects, ChatGPT is faster, better at generating human-like answers, and available to the public.
The bot works in a similar way to AI image generators like DALL-E 2, Midjourney and Stable Diffusion, but much as those programs can create unrealistic images, ChatGPT is not always accurate.
What can ChatGPT do?
ChatGPT can write emails, essays and poetry, answer questions, or generate lines of code based on a prompt. This could be used to develop virtual assistants or to quickly answer customer queries.
Content platform Jasper said about 80,000 clients have used its software to draft ads, emails, blogs and other material.
Marketing is one of the clearest business uses for today’s chatbots, said Gil Elbaz, co-founder of TenOneTen Ventures, a venture capital firm.
What are the limitations of ChatGPT?
ChatGPT is trained on statistical patterns and correlations and does not have an understanding of the input or output in the same way a human would.
“Large language models have limited reliability, limited understanding, limited range, and hence need human supervision,” Michael Osborne, a machine learning researcher at the University of Oxford, told Context.
AI tools can be prone to bias based on the data they were trained with, and a lack of transparency about that training makes it difficult to tell how the bot came to a conclusion.
The EU Agency for Fundamental Rights (FRA) has warned that algorithms based on bad data could cause harm, and that safeguards should be in place to mitigate bias and discrimination.
This is exacerbated when the bot produces false information and presents it as factual, a phenomenon known as “hallucination”.
Last year, Meta warned users that its Blenderbot 3 chatbot can make false or contradictory statements, misremember details, and “forget that they are a bot”.
Will ChatGPT steal jobs?
The jobs that AI tools like ChatGPT could disrupt include repetitive or routine tasks that can be easily automated, including data entry and processing, simple customer service roles, and certain kinds of content creation.
The World Economic Forum’s Future of Jobs Report estimated in 2020 that while 85 million jobs may be displaced by AI and robotics by 2025, another 97 million jobs may emerge from these changes.
However, this disruption will not be equal, with lower-wage workers, women and younger people more deeply impacted.
“ChatGPT is unlikely to put any creative professionals out of work any time soon … the tech isn’t yet fit for purpose,” Gina Neff, executive director of the Minderoo Centre for Technology and Democracy, told Context.
“AI is more likely to change what we do in our jobs, rather than eliminate lots of different jobs,” she said, adding that jobs will shift around new and emerging technologies without being disrupted completely.
What do creative people think of the threats posed by ChatGPT? Singer Nick Cave, responding to a song written by ChatGPT “in the style of Nick Cave”, dismissed it as a “grotesque mockery of what it is to be human”.
Why are AI companies being sued?
A group of visual artists last week sued AI companies Stability AI, Midjourney, and DeviantArt for copyright infringement, saying their work was used without permission to train AI tools.
Getty Images has also initiated legal proceedings against Stability AI for allegedly copying millions of its images.
A similar class action lawsuit was filed last year against Microsoft-owned GitHub for scraping code from the internet without permission, in order to train OpenAI’s tools.
While copyrighted material may have been used to train an AI, firms might be within their rights under fair use copyright law if the result is “transformative”, said Nick White, an intellectual property and digital specialist at law firm Charles Russell Speechlys.
The law allows people to use copyrighted material to generate new content or to comment on it, such as in YouTube videos. However, areas like code are a legal grey area, as the AI does not provide attribution.
“My gut feeling is that the outcome of these cases will grant some protection to originators of copyright works,” White said.
“(But) there’s a whole spectrum of infringement from production of work that is identical to work that has no similarity whatsoever, and I think it is possible we will have many more cases.”
This article was updated on Feb. 7, 2023, to include the Google and Baidu announcements.
(Reporting by Adam Smith; Editing by Rina Chandran.)