Generative AI Glossary: Understanding Key Terminology

Richard Awoyemi

The world of AI is changing faster than you can imagine. With new technologies emerging constantly, keeping up with the latest AI buzzwords can be hard work. That's why we've created this handy Generative AI glossary - your one-stop guide to understanding the fast-moving world of generative AI. Think of it as your personal Generative AI cheat sheet! 🤓🤫

Whether you're a seasoned ML engineer or just AI-curious, you'll find simple explanations for everything from GANs to transformers here. Consider this your primer to the AI revolution, neatly packaged into bite-sized AI keyword definitions you can reference anytime. We aim to demystify even the most complex concepts so that you can engage with new developments from a place of knowledge.

With this glossary by your side, you can dive headfirst into the creative potential of tools like Nack, ChatGPT, Bard, Stable Diffusion, Midjourney and more. Understanding these systems is the first step to harnessing their power - power accessible to all with Nack's user-friendly mobile app. So read on to level up your AI literacy and unlock new dimensions of possibility! The future is generative.

Generative AI Glossary and Key Terminology

Alignment

AI alignment refers to developing AI systems whose goals and incentives are aligned with human values. Poor alignment could lead generative models to produce harmful, biased, or misleading outputs. Approaches to alignment include value learning, objective steering, and oversight from humans in the loop. Advances in AI safety and alignment research are needed to ensure generative models benefit humanity as their capabilities grow more powerful.

OpenAI has launched a Superalignment initiative to improve AI alignment.

Artificial Intelligence

Artificial intelligence (AI) refers to computer systems designed to perform tasks that would otherwise require human intelligence. AI encompasses a variety of techniques that enable machines to learn from data and experience to carry out complex tasks, reason, make predictions and decisions, perceive their environment, and interact with humans naturally. There's a lot of fear that AI will replace humans, but the field's aim is to develop intelligent machines that augment human capabilities, not remove humans from the picture. Key focus areas of AI research include machine learning, computer vision, natural language processing, robotics, and generative modelling.

Bias

Bias refers to systemic errors in AI systems that can lead to discriminatory decisions and unfair outcomes. Bias can be introduced via flawed training data, poor model architecture, or a lack of model transparency. For example, facial recognition systems trained on datasets lacking racial diversity may exhibit racial bias. Likewise, language models like Claude or GPT-4 (both available in Nack) can exhibit biased reasoning if they aren't trained on data spanning a wide range of opinions, ideas, and cultures. Mitigating bias requires testing models for fairness, auditing training data, and using techniques like data augmentation. Removing bias is critical for building equitable, ethical AI.

Embeddings

Embeddings represent data like words or images as numeric vectors capturing semantic meaning. Generative models learn embeddings that cluster similar data points together in the vector space. Good embeddings enable models to make connections between related inputs and outputs. Techniques like contrastive learning further enhance the quality of embeddings for strong generative abilities.
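
To make the idea concrete, here's a minimal Python sketch using made-up four-dimensional vectors (real embeddings have hundreds or thousands of dimensions). Cosine similarity scores how closely two embeddings point in the same direction:

```python
import numpy as np

# Toy embeddings with invented values, purely for illustration.
cat = np.array([0.8, 0.1, 0.6, 0.2])
kitten = np.array([0.75, 0.15, 0.55, 0.25])
car = np.array([0.1, 0.9, 0.05, 0.7])

def cosine_similarity(a, b):
    """How closely two vectors point the same way (1.0 = identical direction)."""
    return np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))

print(cosine_similarity(cat, kitten))  # high: related concepts cluster together
print(cosine_similarity(cat, car))     # low: unrelated concepts sit far apart
```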

Evaluations

Evaluating generative AI systems involves assessing how accurate, coherent, novel, and valuable their outputs are. Both automatic metrics like BLEU and human evaluations are used. However, properly evaluating generative AI remains challenging due to the subjective nature of creative tasks. Setting up rigorous test datasets and making side-by-side comparisons of outputs is essential for benchmarking progress.
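
As a small illustration of an automatic metric, here's a sketch using NLTK's BLEU implementation on a single made-up sentence pair (real evaluations score whole test sets):

```python
from nltk.translate.bleu_score import sentence_bleu, SmoothingFunction

# One human-written reference and one model output, both tokenised (made up).
reference = [["the", "cat", "sat", "on", "the", "mat"]]
candidate = ["the", "cat", "is", "on", "the", "mat"]

# Smoothing avoids zero scores when some higher-order n-grams don't match.
score = sentence_bleu(reference, candidate,
                      smoothing_function=SmoothingFunction().method1)
print(f"BLEU: {score:.2f}")  # closer to 1.0 means closer to the reference
```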

Failing to evaluate generative AI models correctly misinforms people about the progress and shortcomings of those models, eroding consumer confidence.

💡
We've recently written about it here: Is 'GPT4 Getting Dumber' or our Evaluation Criteria?

Fine-Tuning

[Image: a mechanic fine-tuning a car engine, in the article's coloured-pencil line-art style]

Fine-tuning takes a pre-trained generative model like Stable Diffusion or Llama 2 and adapts it to a specific task, style or dataset. It involves additional training on a smaller dataset relevant to the task. Fine-tuning allows customising generative models for specific applications ranging from content creation to search to code generation. Proper fine-tuning is vital to realising the full capabilities of large, general-purpose generative models.
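
Here's a minimal PyTorch sketch of the core idea, with a toy network standing in for a real pre-trained model: freeze the layers that hold general knowledge, then train only the head on a small task dataset (random numbers here):

```python
import torch
import torch.nn as nn

# A toy stand-in for a pre-trained model (real fine-tuning adapts models
# like Stable Diffusion or Llama 2, not this two-layer net).
pretrained = nn.Sequential(
    nn.Linear(16, 32), nn.ReLU(),   # pretend these layers hold general knowledge
    nn.Linear(32, 4),               # the head we want to adapt to our task
)

# Freeze the early layers so the general knowledge is preserved...
for param in pretrained[0].parameters():
    param.requires_grad = False

# ...then train only what's left on a small, task-specific dataset (random here).
inputs, targets = torch.randn(64, 16), torch.randint(0, 4, (64,))
optimizer = torch.optim.Adam(
    (p for p in pretrained.parameters() if p.requires_grad), lr=1e-3
)
loss_fn = nn.CrossEntropyLoss()

for _ in range(20):                 # a short burst of extra training
    optimizer.zero_grad()
    loss = loss_fn(pretrained(inputs), targets)
    loss.backward()
    optimizer.step()
```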

👋
At Nack, you can access several fine-tuned image generators directly on your mobile, e.g. for realistic people or a 3D animation style. Get started today!

GANs

Generative adversarial networks (GANs) are a type of generative model in which two neural networks compete against each other, each improving at its task. One network, the generator, produces synthetic data from random noise, while the other, the discriminator, evaluates how realistic the generated data is. The generator tries to fool the discriminator over successive rounds of training. GANs can produce high-quality synthetic data for applications like image generation, video creation, and drug discovery.
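
The adversarial loop looks roughly like this minimal PyTorch sketch on toy 2-D points - a far cry from an image GAN, but the same principle:

```python
import torch
import torch.nn as nn

generator = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 2))
discriminator = nn.Sequential(nn.Linear(2, 16), nn.ReLU(),
                              nn.Linear(16, 1), nn.Sigmoid())

g_opt = torch.optim.Adam(generator.parameters(), lr=1e-3)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=1e-3)
bce = nn.BCELoss()

for step in range(500):
    real = torch.randn(32, 2) + 3.0        # "real" data: a shifted Gaussian
    fake = generator(torch.randn(32, 8))   # generator maps noise to samples

    # The discriminator learns to label real data 1 and fake data 0.
    d_loss = (bce(discriminator(real), torch.ones(32, 1)) +
              bce(discriminator(fake.detach()), torch.zeros(32, 1)))
    d_opt.zero_grad()
    d_loss.backward()
    d_opt.step()

    # The generator learns to fool the discriminator into predicting "real".
    g_loss = bce(discriminator(fake), torch.ones(32, 1))
    g_opt.zero_grad()
    g_loss.backward()
    g_opt.step()
```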

Generative AI

Generative AI refers to artificial intelligence systems that autonomously create new content like text, images, audio, and video. Unlike analytical AI, which finds patterns in data, generative AI learns the underlying structure of its data to build its own representations, allowing it to generate brand-new, realistic, and often highly customised outputs. Prominent examples include Midjourney for image generation, ChatGPT for text generation, and tools like Nack for multiple functions (e.g., images, text, speech recognition, upscaling, etc.). The rise of generative AI promises to revolutionise content creation across many industries. Recent advancements in computer processing power and the availability of large datasets have enabled significant breakthroughs in generative AI.

GPT

GPT stands for Generative Pretrained Transformer and refers to a series of natural language processing models developed by OpenAI. GPT models are trained on vast text corpora to generate human-like writing. GPT-3, released in 2020, showcased an unprecedented ability to produce cohesive, customised text given a prompt. Subsequent GPT models like GPT-3.5 aim to improve capabilities while avoiding potential harm through safety-focused design. Models like GPT-4 further improve areas like translation, coding, and the ability to follow instructions. GPTs demonstrate the generative capabilities of large language models.

👋
You can access all the latest and greatest GPT models on your mobile device via Nack!

Hallucination

[Image: a cute chibi Spider-Man flying through the clouds, in the article's coloured-pencil line-art style]

Hallucination refers to outputs generated by AI systems that are entirely fabricated rather than grounded in reality. For instance, a text generator might present fictional events as fact. Hallucination stems from the model not having sufficient training data to learn the true distribution of real data. Alternatively, the components required to reach the correct answer may be present in the model's data, but the model must be encouraged to look in the right places (e.g. via prompt engineering techniques like Self-Ask). Extensive training on diverse, high-quality data and techniques like grounding generated text can help reduce hallucinations.

Hallucinations are common when asking about information after the cut-off date of the model's training data. At Nack, we've incorporated internet access into the models so that when links are referenced, the models can fetch up-to-date information - helping to reduce hallucinations and improve the models' usefulness.

Many dislike the term because it anthropomorphises AI, treating it as though it "thinks" like a human. That said, it's still widely used in both academia and mainstream communication.

LLM

Large language models (LLMs) are deep learning NLP models trained on massive text datasets, enabling them to generate human-like text. Examples include GPT-3 and Google's LaMDA; GPT-4, Claude 2, and even open-source models like Llama 2 represent further giant steps forward for LLMs.

"GPT-4 is the latest version of Generative Pre-trained Transformers, a type of deep learning model used for natural language processing and text generation. It marks a significant milestone in the field of artificial intelligence, particularly in natural language processing." - Source - Data Camp

The key to LLMs' generative abilities is their sheer scale - the largest models contain over 100 billion parameters and are trained on enormous volumes of text. However, concerns are now shifting from the size of models and datasets to the quality and diversity of the data. As these models are used more widely, concerns about bias, safety, and compute costs are becoming more prevalent.
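
For a hands-on feel, here's a small sketch using Hugging Face's transformers pipeline with GPT-2, an early openly available LLM. Modern models like GPT-4 or Llama 2 work on the same next-token principle at vastly larger scale:

```python
from transformers import pipeline

# Download a small, openly available language model and generate text with it.
generator = pipeline("text-generation", model="gpt2")

result = generator("Generative AI is", max_new_tokens=30, num_return_sequences=1)
print(result[0]["generated_text"])  # the model continues the prompt token by token
```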

Nack gives access to all the best language models directly on your mobile device - supercharged by many other features.

Machine Learning

[Image: cute chibi robots studying in a classroom, in the article's coloured-pencil line-art style]

Machine learning is a subset of AI focused on building systems that can learn and improve from data without explicit programming. Algorithms enabling this include neural networks, regression, clustering, reinforcement learning and more. Modern machine learning, especially deep learning, underpins the most recent advances in generative AI by learning complex mappings between inputs like text or images and outputs.

Multi-Modal

Multi-modal AI refers to models that can process and relate data from multiple modalities like text, images, audio, video, etc. Large multi-modal models attempt to build unified representations across data types, allowing for rich generative applications, such as generating images from text descriptions or vice versa. Training data diversity and advances in model architecture enable more sophisticated multi-modal generative AI.

Tools like Nack bring multiple such functions (e.g., images, text, speech recognition, upscaling) together on a single mobile platform.

Neural Networks

Neural networks are computing systems modelled on the human brain's network of neurons. They are a core part of modern AI and are well-suited for generative applications. Neural nets consist of input and output layers and many hidden layers in between containing neurons with weighted connections. Training on large datasets allows neural nets to learn complex mappings between inputs and outputs. This enables them to generate new results given inputs they have not seen before. Different neural net architectures like GANs and Transformers have driven advances in generative AI.
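
In code, the layered structure is easy to see. Here's a minimal PyTorch sketch of a tiny feed-forward network (untrained, with random weights):

```python
import torch
import torch.nn as nn

# Input layer -> hidden layers -> output layer, all with weighted connections.
net = nn.Sequential(
    nn.Linear(10, 32),  # 10 input features feed 32 neurons
    nn.ReLU(),          # non-linearities let the net learn complex mappings
    nn.Linear(32, 32),  # a hidden layer
    nn.ReLU(),
    nn.Linear(32, 1),   # a single output, e.g. a score or prediction
)

print(net(torch.randn(1, 10)))  # a response to an input it has never seen
```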

Prompt

Prompts provide context to prime generative AI systems towards desired outputs. A prompt may be a text description, an image, or even samples of desired audio or video. Carefully engineered prompts guide the creative process while leveraging the model's knowledge. Prompt programming unlocks more precise control over open-ended generative systems.

Prompt Engineering

Prompt engineering refers to the art of designing effective prompts for generative language models like GPT-4 to produce desired outputs. Factors like prompt content, length, formatting, priming, and framing all influence output quality. Prompt engineering allows using generative models more precisely for creative and practical applications. As the discipline expands, it's being applied more broadly to the intersection between AI and software engineering, and it increasingly covers broader architectural decisions when building AI-powered systems.
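
As a simple illustration (the wording below is ours, not from any official guide), compare an under-specified prompt with an engineered one:

```python
# The same request, framed two ways. Role priming plus explicit audience,
# length, format, and tone constraints steer the model's output.
weak_prompt = "Write about dogs."

engineered_prompt = (
    "You are a veterinary science writer. Write a three-sentence summary of "
    "how dogs communicate, aimed at first-time owners. Use plain language "
    "and end with one practical tip."
)

for prompt in (weak_prompt, engineered_prompt):
    print(prompt)
    print("---")
```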

RLHF

Reinforcement learning from human feedback (RLHF) trains AI systems by having them generate options and refine behaviour based on human judgment. Unlike static datasets, RLHF allows customising generative models interactively for users' needs. It is a promising technique for improving alignment, mitigating bias, and capturing nuanced preferences. Advances in computational efficiency are making RLHF systems more practical.
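
At the heart of the standard RLHF recipe is a reward model trained on human preference pairs. Here's a minimal sketch of the usual pairwise preference loss, with made-up reward scores:

```python
import torch
import torch.nn.functional as F

# Made-up scores that a reward model might assign to two candidate outputs.
reward_chosen = torch.tensor([1.8])    # the output the human preferred
reward_rejected = torch.tensor([0.4])  # the output the human rejected

# The reward model is trained so preferred outputs score higher than rejected
# ones; this loss shrinks as the gap between the two scores grows.
loss = -F.logsigmoid(reward_chosen - reward_rejected).mean()
print(loss)
```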

Supervised Learning

Supervised learning is a machine learning approach involving training models on labelled datasets containing input-output pairs. Models learn to produce correct outputs by minimising errors in prediction. Supervised learning enables generative models to produce conditioned results, e.g. generating images of specific types. Proper labelling of training data is crucial for supervised learning.
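
A minimal scikit-learn sketch with an invented labelled dataset - hours studied as the input, pass/fail as the output label:

```python
from sklearn.linear_model import LogisticRegression

# Labelled input-output pairs (values invented for illustration).
X = [[1], [2], [3], [4], [5], [6]]  # hours studied
y = [0, 0, 0, 1, 1, 1]              # 0 = failed, 1 = passed

model = LogisticRegression()
model.fit(X, y)                 # minimise prediction error on the labelled data

print(model.predict([[3.5]]))   # predict the label for an unseen input
```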

Transformer

Transformers are a neural network architecture well-suited to modelling language and other sequential data. They process entire sequences in parallel to learn contextual representations. Generative pre-trained transformers like GPT-4 exhibit strong generative abilities when fine-tuned on downstream tasks. The Transformer design underpins many recent generative AI breakthroughs in images, video, and audio.
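
The core operation is scaled dot-product attention. Here's a bare-bones NumPy sketch; real Transformers use learned projections for queries, keys, and values, but we reuse the inputs here to keep the idea visible:

```python
import numpy as np

# A sequence of 3 tokens, each a 4-dimensional vector (random toy values).
x = np.random.randn(3, 4)
q, k, v = x, x, x  # stand-ins for the learned query/key/value projections

scores = q @ k.T / np.sqrt(k.shape[-1])  # how much each token attends to the others
weights = np.exp(scores) / np.exp(scores).sum(axis=-1, keepdims=True)  # softmax
output = weights @ v                     # context-aware token representations

print(output.shape)  # (3, 4): every token now encodes its full context, in parallel
```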

Training Data

Training data refers to the datasets that train generative AI models like neural networks. The quality and size of training datasets are critical to how well the models can generate sensible outputs. For example, text generation models are trained on large volumes of natural language data to learn the structure and logic behind human writing and conversation. Image generation models require diverse photo datasets spanning many scenarios. High-quality, large-scale training data is essential for building generative AI that outputs creative, varied, and realistic content.

Unsupervised Learning

Unsupervised learning involves training AI models to find patterns and structures in unlabelled data. Models learn to represent data well internally through objectives like reconstruction. Generative adversarial networks, which pit two neural nets against each other, demonstrate unsupervised learning. Since labelling data is expensive, unsupervised techniques enable leveraging abundant raw data for generative tasks.
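
A minimal scikit-learn sketch: k-means clustering finds groups in unlabelled points without ever being told what the groups are:

```python
from sklearn.cluster import KMeans

# Unlabelled 2-D points: no output labels, just raw data (values made up).
data = [[1.0, 1.0], [1.2, 0.9], [0.8, 1.1],   # one natural group
        [8.0, 8.0], [8.1, 7.9], [7.9, 8.2]]   # another natural group

# The model discovers the two-cluster structure on its own.
model = KMeans(n_clusters=2, n_init=10, random_state=0).fit(data)
print(model.labels_)  # e.g. [0 0 0 1 1 1]: structure found without any labels
```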

Conclusion

And that wraps up our glossary of key terms for generative AI! Hopefully, it helps you feel more comfortable with generative AI terminology. With these definitions under your belt, you're ready to confidently explore the endless potential of systems like Nack, Midjourney, Stable Diffusion, ChatGPT, Claude, and more. Whether you want to create art, generate content, or simply understand the latest innovations, this guide has armed you with the knowledge you need.

Now go forth and engage with AI on the cutting edge! And if you're looking for an easy way to interact with all the top models in one place, be sure to check out Nack. Our ever-expanding app makes generative AI accessible and user-friendly, so you can focus on creating. The future is yours to imagine - and generate.

Download Nack to enjoy the power of AI from the comfort of your phone. Also, follow us on our social channels to stay updated with AI news, content and product updates.