The Gist

  • Back to the '40s? Generative AI has a long history, with notable milestones from the 1940s to the present, significant contributions from researchers like Claude Shannon and Alan Turing, and more recent advances from companies like OpenAI.
  • Far-reaching AI touch. The technology has widespread applications in various industries such as healthcare, finance, and entertainment, and is predicted to boost global GDP by 7% over the next 10 years.
  • Goodbye, human tasks. While generative AI may disrupt the labor market and affect millions of full-time workers, experts believe it will replace tasks rather than entire jobs, potentially leading to long-term changes in work, education and entertainment.

Generative AI, also known as GenAI, has been around for decades but has only recently gained widespread attention. Marketers and customer experience professionals have leveraged the technology for more efficient campaigns, content-building and data-analysis projects. The current surge of interest is due in part to recent advances in deep learning, which have made generative AI techniques more powerful and efficient.

Let’s take a look at the history of generative AI, from its early days to the present. 

Generative AI Timeline: Most Notable Moments 

Deep learning algorithms are becoming more powerful and efficient, and people are applying them to a wider range of problems. Generative AI is already used in a variety of industries, including healthcare, finance and entertainment.

For example, generative AI tools help create realistic images and videos for use in movies, television shows and video games. Health providers can also use the technology to create realistic medical images for use in diagnosis and treatment. 

Today, most people associate generative AI with OpenAI, the company behind ChatGPT and DALL-E, or other tech bigwigs like Microsoft and Alphabet. The technology’s timeline, however, spans decades and organizations. 

Some of the most notable milestones in the development of generative AI include: 

1940s – 1950s: The Birth of Artificial Intelligence

  • 1948: Claude Shannon publishes his paper “A Mathematical Theory of Communication,” which references the idea of n-grams. Shannon’s work focuses on the question: Given a sequence of letters (a sentence, for example), what’s the likelihood of the next letter? (See the short sketch after this list.)
  • 1950: Alan Turing publishes his paper “Computing Machinery and Intelligence,” which introduces the Turing Test, a test of a machine’s ability to exhibit intelligent behavior equivalent to, or indistinguishable from, that of a human.

Alan Turing, circa 1930

  • 1952: A.L. Hodgkin and A.F. Huxley develop a mathematical model of how neurons transmit electrical signals, work that later inspires research into artificial intelligence, natural language processing and more. 
  • 1956: The Dartmouth Summer Research Project on Artificial Intelligence, considered the birth of AI, brings together researchers from a variety of disciplines, including computer science, linguistics and philosophy, to discuss the possibility of creating machines that can think.
  • 1956: Arthur Samuel builds one of the first examples of AI as search on the IBM 701 Electronic Data Processing Machine with his checkers program, which uses an optimization process for searching game trees called “alpha-beta pruning.” Samuel also implements a “reward” for a specific move, allowing the application to learn every time it plays a game. 
  • 1957: Noam Chomsky releases “Syntactic Structures,” a book that lays out a style of grammar called “phrase structure grammar,” which represents the structure of natural language (or human) sentences in a form that computers can process. 
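To make Shannon’s next-letter question concrete, here is a small illustrative sketch (not anything from Shannon’s paper) that builds a character-level bigram model, an n-gram with n = 2, and estimates which letter is likely to follow a given one. The training text and the Python implementation are assumptions chosen purely for the example.

```python
# Toy character bigram model (an n-gram with n = 2); the training text is made up.
from collections import Counter, defaultdict

text = "the theory of communication"

# Count how often each character follows each other character.
follow_counts = defaultdict(Counter)
for current_char, next_char in zip(text, text[1:]):
    follow_counts[current_char][next_char] += 1

def next_char_probabilities(char):
    """Estimate P(next letter | current letter) from the counts."""
    counts = follow_counts[char]
    total = sum(counts.values())
    return {c: n / total for c, n in counts.items()}

print(next_char_probabilities("t"))  # 'h' is the most likely letter after 't' in this text
```

Modern large language models ask essentially the same question at a vastly larger scale, predicting the next token given everything that came before.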

1960s – 1970s: The World's First Chatbot

  • 1961: Marvin Minsky publishes his paper “Steps Toward Artificial Intelligence,” a survey of the field’s central problems, including search, pattern recognition, learning and planning, which helps set the agenda for AI research in the decades that follow. 
  • 1964: The US National Research Council (NRC) establishes the Automatic Language Processing Advisory Committee (ALPAC), a group of seven scientists led by chairman John R. Pierce, to evaluate the progress of natural language processing and machine translation research. 
  • 1964-1966: Joseph Weizenbaum develops the first chatbot, ELIZA, at the MIT Artificial Intelligence Laboratory. ELIZA can simulate a conversation with a human by using a simple algorithm to generate text responses to questions.

ELIZA, considered to be the first chatbot, in action.

  • 1966: ALPAC releases its infamous report, which expresses skepticism toward natural language processing research and emphasizes the need for a more basic understanding of computational linguistics. In response, the NRC and ALPAC halt funding for natural language processing and machine translation research, stalling innovation in the field for nearly two decades.

1980s – 1990s: Neural Networks Identify Patterns

  • 1980s: Research in natural language processing, artificial intelligence and machine translation begins to recover. During this time, IBM develops several statistical models that use machine learning to make probability-based decisions.  
  • 1982: John Hopfield develops the Hopfield network, a recurrent neural network (RNN) that can learn and remember patterns. These networks provide a model for understanding human memory. 
  • 1997: Sepp Hochreiter and Jürgen Schmidhuber introduce long short-term memory (LSTM), a type of recurrent neural network architecture that can retain information over long sequences. These networks allow computer programs to identify patterns in sequential data and solve problems that stumped earlier RNNs (a minimal usage sketch appears after this list). 
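As a rough illustration of what an LSTM does (a sketch under assumed dimensions, not anything from the 1997 paper), the snippet below feeds a short sequence through an LSTM layer using the PyTorch library and reads out a summary vector for each time step. The sizes and random input are arbitrary choices for the example.

```python
# Minimal LSTM usage sketch in PyTorch (illustrative only; sizes are arbitrary).
import torch
import torch.nn as nn

lstm = nn.LSTM(input_size=8, hidden_size=16, batch_first=True)  # one LSTM layer

x = torch.randn(1, 20, 8)       # a batch of one sequence: 20 time steps, 8 features each
output, (h_n, c_n) = lstm(x)    # output: hidden state at every step; h_n, c_n: final states

print(output.shape)  # torch.Size([1, 20, 16]) -- one 16-dim summary per time step
print(h_n.shape)     # torch.Size([1, 1, 16])  -- the final hidden state
```

The final hidden state carries information from the whole sequence, which is what lets LSTMs pick up patterns that span many steps.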

2000s – 2010s: Hey, Siri, What's Artificial Intelligence?

  • 2003: Yoshua Bengio and his team develop the first feed-forward neural network language model, which predicts the next word when given a sequence of words. 
  • 2011: Apple brings AI and NLP assistants to the masses by releasing its first iPhone with Siri. The digital voice assistant used predefined commands to perform actions and answer questions. 

iPhone 4s, first iPhone with Siri integration

  • 2013: A group of Google researchers led by Tomas Mikolov create Word2vec, a technique for natural language processing that uses a neural network to learn word associations from a large set of text. It can then suggest additional words to complete partial sentences and detect synonymous words. 
  • 2014: Ian Goodfellow develops the first generative adversarial network (GAN), a class of machine learning frameworks that can generate new data based on a given training set. For example, a GAN trained on photos can create new photos that look authentic to humans (if you don’t look too closely). 
  • 2015: Dzmitry Bahdanau and his team introduce the attention mechanism, which addresses a problem with traditional encoder-decoder architectures: they have to compress an entire input sentence into a single representation before translating it, so performance deteriorates on longer sentences. The attention model instead focuses on the words that best help it formulate each part of the output (a simplified sketch appears after this list). 
  • 2017: A team of Google researchers led by Ashish Vaswani proposes a new, simple network architecture, the Transformer, in the paper “Attention Is All You Need.” It is based solely on attention mechanisms and does away with recurrent neural networks entirely. 
  • 2018: Alec Radford’s paper on generative pre-training (GPT) of a language model is published on OpenAI’s website, showing how a generative language model can acquire knowledge and process long-range dependencies through unsupervised pre-training on a large and diverse body of text. 
  • 2019: OpenAI releases the complete version of its GPT-2 language model, which was trained on a dataset of more than eight million web documents drawn from URLs shared in Reddit posts with at least three upvotes. 
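To give a feel for the attention idea described above (a simplified sketch, not the exact formulation from either paper), the snippet below scores how relevant each input word is to a query and uses those scores to build a weighted summary. The toy vectors and the NumPy implementation are assumptions made for the example.

```python
# Toy scaled dot-product attention in NumPy (illustrative; the word vectors are made up).
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def attention(query, keys, values):
    """Weight each value by how well its key matches the query."""
    scores = keys @ query / np.sqrt(query.shape[0])  # similarity of the query to every word
    weights = softmax(scores)                        # normalize scores so they sum to 1
    return weights @ values, weights                 # weighted summary of the inputs

# Three "words", each represented by a 4-dimensional vector.
keys = values = np.array([
    [1.0, 0.0, 0.0, 0.0],   # word 1
    [0.0, 1.0, 0.0, 0.0],   # word 2
    [0.9, 0.1, 0.0, 0.0],   # word 3 (similar to word 1)
])
query = np.array([1.0, 0.0, 0.0, 0.0])  # what the model is currently "looking for"

summary, weights = attention(query, keys, values)
print(weights.round(2))  # words 1 and 3 receive most of the attention weight
```

Transformers apply this operation many times in parallel, which is what lets models like GPT relate every word in a passage to every other word.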

2020s: ChatGPT, The Fastest-Growing AI Chatbot

  • 2022: Startup Stability AI releases Stable Diffusion, a deep learning text-to-image model that generates images from text descriptions, joining a wave of diffusion-based image generators that includes DALL-E 2 and Midjourney. 
  • 2022: OpenAI releases ChatGPT, a chatbot built on its GPT-3.5 model, which reaches one million users within five days. The tool’s training data only extends to 2021, so it has no knowledge of more recent events.

ChatGPT Hits 1 Million Users Chart

  • 2023: The generative AI arms race begins. Microsoft integrates ChatGPT technology into Bing, a feature now available to all users. Google releases its own generative AI chatbot, Bard. And OpenAI releases GPT-4, the next version of its model, with access offered through a paid “premium” subscription. 
  • 2023: OpenAI begins rolling out a beta web-browsing plugin for ChatGPT, giving the chatbot access to current information from the web rather than limiting it to its training data, a capability few other generative AI tools offered at the time.

The beta version of ChatGPT's browser plugin

  • 2023: The US Copyright Office launches a new initiative to examine AI-generated content and potentially map out guidelines, which could mean the rise of government involvement in AI. 

The list above is not exhaustive; many people and discoveries went into shaping generative AI as we know it today. It does, hopefully, shed light on key moments in the tech’s long history. 

The Future of Generative AI

Generative AI technology isn’t going anywhere, but its place in our world isn’t entirely clear yet.

It offers a lot of promise for a wide range of industries — healthcare, finance, manufacturing, business, education, media and entertainment. A report from Goldman Sachs predicted the technology could boost annual global gross domestic product (GDP) by 7% over the next 10 years.

It will also mean a shift in the status quo. That same report found that, if generative AI lives up to its promises, it could significantly disrupt the labor market and affect approximately 300 million full-time workers.

Most experts seem to agree that the tech in its current state won’t fully replace workers — only tasks. However, the space seems to be evolving rapidly, and long-term changes to how we work, learn, entertain ourselves and more could be on the horizon.