Key tools, technologies, and terms

GenAI builds on a specific set of technologies, though the ecosystem around them grows every day. Whatever your GenAI use case, there are a few key technologies that any GenAI stack will touch. But first, let’s talk about two central technologies that all the others build on.


Python

Almost all machine learning technologies use Python as their primary programming language. GenAI and machine learning grew out of data science, and data science has used Python for years because it’s readable, open source, and, as a scripting language, doesn’t require compilation to test changes. Data scientists don’t always have expert programming skills, so a language that’s easy to understand, abstracts away a lot of detail, and allows quick experiments gained traction with them.

Because Python has been central to data science for so long, a massive ecosystem has arisen around it. While not everything can be written in Python, it is a language that wraps very nicely around other faster languages like C.
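As a small illustration of that wrapping, the snippet below uses NumPy, whose array math runs in compiled C even though you call it from Python. (This is a minimal sketch; the numbers are arbitrary.)

```python
import numpy as np

# NumPy is Python code on the surface, but its array math runs in
# compiled C underneath -- the usual way Python "wraps" faster languages.
weights = np.array([0.2, 0.5, 0.3])
inputs = np.array([1.0, 2.0, 3.0])

# One call computes the whole dot product in C, with no Python loop:
activation = np.dot(weights, inputs)
print(activation)  # 0.2*1 + 0.5*2 + 0.3*3 ≈ 2.1
```

This pattern, a readable Python interface over fast compiled internals, is exactly why the data science ecosystem settled on the language.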

Hardware accelerators

For decades, the primary computing engine of most computers has been the CPU: a general-purpose serial computing unit that handles a handful of operations at once and uses a memory cache to store interim computations. Hardware accelerators like GPUs (graphics processing units) and TPUs (tensor processing units) can instead process thousands of small computations in parallel. These grew out of 3D graphics, which calculates multiple points in space and light sources in order to render an image, but they found new life in machine learning and AI, which need to calculate thousands of weights and biases in parallel.
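To make the contrast concrete, here is the data-parallel pattern accelerators are built for, sketched with NumPy. (NumPy runs on the CPU; the point is the shape of the work, and the sizes are arbitrary.)

```python
import numpy as np

# A matrix multiply is the classic accelerator workload: every cell of
# the output is an independent dot product, so a GPU or TPU can compute
# thousands of them at the same time instead of one after another.
a = np.random.rand(256, 256)
b = np.random.rand(256, 256)

c = a @ b  # 256 x 256 = 65,536 independent dot products

print(c.shape)
```

Frameworks like PyTorch and TensorFlow let the same tensor code run on a CPU or a hardware accelerator, moving work like this onto whichever device is available.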

Neural networks

Neural networks are the basis for most GenAI models. There are several different types that you will encounter when considering GenAI:

Large language models (LLMs)

These models process and understand natural language, with the ability to specialize in certain domains or tasks.

Generative adversarial networks (GANs)

Used for generating synthetic data, GANs are effective in tasks like image and video synthesis, as well as text-to-image applications.

Variational auto-encoders (VAEs)

VAEs excel at generating new data, applicable in image/video synthesis and music generation.

Transformer-based LLMs

A subset of LLMs, these models focus on advanced natural language processing, customizable for specific domains.

Multimodal models

These models handle and generate data across various modalities, including text, image, and audio.

Other technologies

Machine learning frameworks

Open-source Python libraries like PyTorch and TensorFlow make training and fine-tuning ML models more accessible and standardized. The complex math of these models can be intimidating to implement, but these frameworks simplify the process for everyone.

Data lakehouses

GenAI relies on large amounts of data, whether for training, fine-tuning, or semantic search. This data is often stored in data lakehouses, which combine the structured reliability and low latency of a data warehouse with the cost efficiency of a data lake. AI processes can access the same data that your business intelligence and analytics processes do, allowing you to unlock greater insights and features in your product.

Vector databases

LLMs turn text into vectors: series of coordinates that map language into an N-dimensional space. Those vectors need to be stored somewhere. Vector databases store these large arrays (sometimes over one thousand dimensions) efficiently and enable fast search through similarity measures like cosine distance. Vector databases are primarily needed for retrieval-augmented generation and semantic search applications.
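A toy version of what a vector database does, sketched in NumPy: store a few embedding vectors and rank them by cosine similarity to a query. The three-dimensional vectors and document names here are hypothetical stand-ins for real embeddings, which can have a thousand or more dimensions.

```python
import numpy as np

# Hypothetical embeddings: real ones come from an embedding model.
docs = {
    "cat": np.array([0.9, 0.1, 0.0]),
    "dog": np.array([0.8, 0.2, 0.1]),
    "car": np.array([0.0, 0.1, 0.9]),
}

def cosine_similarity(a, b):
    """Cosine of the angle between two vectors: 1.0 means same direction."""
    return np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))

query = np.array([1.0, 0.0, 0.0])  # pretend embedding of "kitten"

# Rank stored documents by similarity to the query -- the core of
# semantic search, which vector databases do at scale and at speed.
ranked = sorted(docs, key=lambda k: cosine_similarity(docs[k], query),
                reverse=True)
print(ranked)  # ['cat', 'dog', 'car']
```

A production vector database adds approximate nearest-neighbor indexing so this ranking stays fast across millions of vectors, but the similarity math is the same.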

More key terms

Generative AI (GenAI)

An AI system that can create original responses to user prompts.

Large language model (LLM)

A GenAI system trained on a massive amount of text data that can respond with text.


Hallucination

In GenAI, when an LLM invents facts, citations, or other information that does not reflect reality.

Model drift

This occurs when LLMs lose predictive power because of changes in the real world or in the relationships within their data.


Fine-tuning

Updating an LLM with new information and changing the model’s weights and biases after it has been deployed to production.


Explainability

The ability for systems to explain how a machine learning model arrived at a response. This is not always easy, as ML models do not operate in entirely predictable ways.


Debiasing

Removing biased concepts from models after training. Biases can be any unfair, subjective statements about demographic groups and can be introduced by training data.

Semantic vector(s)

A representation of the meaning of a piece of text as an array of numbers so that it can be represented in a searchable N-dimensional space.


These are just a few of the high-level tools and technologies that power GenAI. Any specific GenAI program will likely use these, but will also use a wide range of other techniques and technologies, including retrieval-augmented generation, monitoring, and more.