LLM vs. Generative AI: What Are Their Differences?

By Manick Bhan on Aug 18, 2025 - 26 minute read

Large Language Models (LLM) vs. Generative AI defines two distinct paths in artificial intelligence. LLMs process language, craft fluent responses, and interpret complex queries. Generative AI constructs images, audio, and video that expand creative possibilities. Understanding LLM and Generative AI reveals different techniques and strengths that shape their real-world impact.

LLMs analyze large text datasets to recognize patterns, predict sequences, and produce human-like outputs. Generative AI explores diverse data types to design realistic content across media formats. LLMs advance research and decision-making with accurate, structured information. Generative AI empowers design, entertainment, and marketing with new visual assets. Both technologies enhance user experiences through personalized responses and original content.

LLMs face issues like bias, hallucinations, and high computational costs. Generative AI contends with deepfakes, ownership concerns, and unpredictable output. LLMs grow more trustworthy by improving training processes and model transparency. Generative AI builds more consistent results as tools evolve. Together, LLMs and generative AI influence how industries innovate and shape the future of intelligent systems.

What is a Large Language Model (LLM)?

A large language model (LLM) is an advanced Generative AI model trained on huge volumes of textual data to recognize linguistic patterns and predict the most probable word or phrase in a sequence. LLMs process and generate human-like responses by utilizing the relationships they have learned between words, concepts, and contexts. These models form the foundation for modern conversational agents, automated writing tools, and enhanced search experiences.

LLMs draw knowledge from diverse sources such as articles, websites, and books, which allows them to tackle a broad range of tasks. LLMs help with text generation, question answering, long content summarization, and translation between languages. The size and scope of an LLM give it the flexibility to support new tasks without additional fine-tuning, making it a powerful, adaptable tool in many domains.

How Does an LLM Work?

LLMs transform input text into meaningful output by performing a series of computational processes. Computational processes in LLMs rely on deep learning principles and statistical language modeling to produce accurate, fluent, and context-aware results. 

There are 6 stages in the process of an LLM. The 6 stages of an LLM process are given below, with a short code sketch after the list.

  1. Data Collection: LLMs train on vast and diverse datasets that include books, articles, websites, and other written material to capture patterns across multiple domains.
  2. Tokenization: Input text is split into smaller chunks called tokens (often subwords) that help the model process language efficiently and recognize relationships.
  3. Training: LLMs use deep neural networks, usually transformer architectures, to analyze these tokens and adjust millions or even billions of parameters during training to improve their predictive accuracy.
  4. Context Processing: LLMs incorporate attention mechanisms that allow them to focus on relevant sections of the input text. LLMs model complex dependencies between words and concepts.
  5. Prediction: Once trained, an LLM calculates the most probable next token based on learned patterns and context. The LLM assembles coherent sentences or paragraphs that match the intent of the prompt.
  6. Output Generation: LLMs repeat this prediction process iteratively until they produce a complete and contextually appropriate output that fulfills the request of the user.
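
The short sketch below ties these stages together using the Hugging Face transformers library; the model name and sampling settings are illustrative assumptions rather than a reference to any production LLM.

```python
# Minimal sketch of tokenization, prediction, and output generation with
# Hugging Face transformers. The model name and sampling settings are
# illustrative assumptions, not a recommendation.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"  # small, publicly available model used for illustration
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

prompt = "Large language models work by"
inputs = tokenizer(prompt, return_tensors="pt")   # Stage 2: tokenization

# Stages 4-6: attention over the context, next-token prediction, and
# iterative generation until a complete output is produced.
output_ids = model.generate(
    **inputs,
    max_new_tokens=40,
    do_sample=True,
    top_p=0.9,
)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```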

What Are Some Examples of LLM Models?

There are different examples of large language models. The examples of large language models are listed below.

  • GPT-4 by OpenAI: A powerful language model that generates highly fluent, context-aware responses. GPT-4 supports a broad range of tasks, which include writing, coding, summarization, and question answering. Its advanced reasoning and multimodal capabilities help process both text and images.
  • PaLM: Built to understand and produce nuanced content across multiple languages, PaLM excels at tasks like translation, summarization, coding, and common-sense reasoning. Its scale and multilingual training data make PaLM especially effective for global use cases.
  • Gemini: Designed for multimodal interactions, Gemini processes text, images, audio, and video. Gemini enables rich, context-aware dialogues and integrates seamlessly into products like search and personal assistants, delivering a more holistic AI experience.
  • Claude by Anthropic: Prioritizes safety, ethics, and conversational control. Claude specializes in delivering natural-sounding, user-friendly responses that align closely with guidelines, making it an appealing choice for customer support and content moderation applications.
  • LLaMA by Meta: An open-weight, research-focused language model that emphasizes efficiency and adaptability. LLaMA allows developers and academics to fine-tune its parameters for specialized use cases, encouraging innovation and customization at scale.
  • Cohere Command R: Designed for enterprise and developer teams, Command R specializes in generating helpful, accurate content with a focus on efficiency. Cohere Command R supports chatbot deployment, automated writing, and knowledge retrieval across business domains.
  • Mistral: Lightweight and performance-optimized for scalability, Mistral offers flexibility without compromising on language understanding. Mistral suits real-world business needs like automated support, data enrichment, and rapid content generation for diverse industries.
  • Grok: Developed by xAI, Grok specializes in conversational AI with a focus on real-time information retrieval and witty, engaging responses. Grok integrates live data streams to provide up-to-date answers, making it well-suited for dynamic contexts like social media monitoring and interactive Q&A.
  • Qwen: Created by Alibaba, Qwen is a multilingual language model that excels at generating content across different writing styles and domains. Qwen has strong multilingual capabilities and scalability that make it a practical choice for businesses looking to support diverse, global audiences.
  • DeepSeek: Designed for complex, data-intensive tasks, DeepSeek emphasizes deep contextual understanding and efficient processing of large datasets. DeepSeek supports advanced analytics, content generation, and knowledge discovery, helping organizations leverage AI for research, insight extraction, and innovation.

What is Generative AI?

Generative AI is a branch of artificial intelligence that specializes in creating new, original content across different formats. Generative AI constructs text, images, audio, video, and other data types by learning the underlying structure and patterns from existing examples. This process enables generative AI to produce creative outputs that often feel human-made.

Generative AI draws knowledge from diverse examples such as articles, images, videos, and sound clips, which allows it to tackle a broad range of tasks. Generative AI supports text generation, image creation, video synthesis, and even music composition. The scale and flexibility of generative AI give it the capability to take on new creative tasks without starting from scratch, making it a powerful, adaptable tool across many industries.

How Does Generative AI Work?

Generative AI works by learning patterns and structures from large datasets, then using this knowledge to create new, original content that mimics those patterns. These steps allow the generative AI model to generate original, contextually relevant, and high-quality outputs across various data types.

There are 6 stages in the process of Generative AI. The 6 stages of Generative AI are listed below, with a short code sketch after the list.

  1. Data Collection: Generative AI consumes large datasets containing diverse types of content, such as images, videos, music, or text, to understand underlying structures and styles.
  2. Encoding: Generative AI breaks down inputs into compressed representations that capture the most important patterns, allowing the model to process and manipulate complex data efficiently.
  3. Training: Models like GANs, VAEs, and diffusion models train on these representations to recognize recurring elements and relationships that define the training data.
  4. Generation: Once trained, generative AI receives prompts or random inputs and synthesizes new content that matches the learned patterns, often generating entirely novel variations.
  5. Refinement: Generative AI iteratively fine-tunes its outputs by comparing them against its learned criteria, adjusting details until they reach a realistic or desired level of quality.
  6. Output Delivery: Generative AI returns a complete image, piece of text, sound clip, or video that exhibits the features and creativity learned from the data.
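
To make the generation and refinement stages concrete, the toy loop below mimics how a diffusion-style model iteratively turns random noise into structured output. The denoise_step function is a hypothetical stand-in for a trained network, not a real model.

```python
# Toy illustration of the generation and refinement stages of a
# diffusion-style generative model. `denoise_step` stands in for a trained
# neural network; real models learn this behavior from data.
import numpy as np

def denoise_step(sample: np.ndarray, step: int) -> np.ndarray:
    # Hypothetical placeholder: nudge the sample toward a "learned" pattern.
    target = np.linspace(0.0, 1.0, sample.size)
    strength = 1.0 / (step + 2)          # large corrections early, finer ones later
    return sample + strength * (target - sample)

total_steps = 50
sample = np.random.randn(64)             # Stage 4: start from random noise
for step in range(total_steps):          # Stage 5: iterative refinement
    sample = denoise_step(sample, step)

print(sample[:5])                        # Stage 6: deliver the final output
```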

What Are Some Examples of Generative AI?

There are different examples of generative AI. The examples of generative AI are listed below.

  • DALL·E by OpenAI: A powerful text-to-image model that transforms textual prompts into highly detailed and imaginative visuals. DALL·E enables marketers, designers, and artists to quickly prototype concepts, create unique artwork, and explore diverse visual styles with fine-grained control.
  • Midjourney: A popular generative image tool focused on creating stylized, creative artwork. Midjourney excels at producing cinematic and surreal visuals, making it a go-to for illustrators, concept artists, and content creators who want distinctive, high-impact imagery.
  • Stable Diffusion: An open-source image generation model that lets developers and artists fine-tune and host the technology themselves. Stable Diffusion supports everything from personalized art styles to commercial image generation, allowing flexible integration into creative workflows.
  • Sora by OpenAI: A generative AI model that transforms text prompts into short video clips. Sora enables video creators, animators, and marketers to produce cinematic scenes, visual simulations, and animated marketing content without the need for complex video editing tools.
  • Runway Gen-2: A generative AI platform for text-to-video and image-to-video synthesis. Runway Gen-2 supports short-form video production, aiding media companies and brands looking to craft engaging visual content quickly and efficiently.
  • Jukebox by OpenAI: A model that produces original music tracks across genres and styles based on textual or melodic input. Jukebox benefits game developers, filmmakers, and advertisers by generating unique background scores and musical variations that suit specific moods and themes.
  • MusicLM: A generative model that renders music and soundscapes directly from descriptive prompts. MusicLM empowers sound designers and musicians to explore new melodies, harmonies, and arrangements without starting from scratch, enhancing creativity in multimedia productions.
  • ChatGPT by OpenAI: A generative text-focused model that produces human-like responses across a broad range of topics. ChatGPT helps writers, researchers, marketers, and educators by generating drafts, answering questions, translating languages, and creating structured content. Its language-centered design enables productive conversations and rapid ideation across domains.

What are the Differences Between LLM vs. Generative AI?

LLMs and generative AI often come up together when discussing AI capabilities, but they offer different strengths and serve different purposes. Exploring key differences between an LLM and Generative AI reveals where each one excels and why knowing these differences is important when choosing tools for specific goals.

There are 5 core differences between LLM and Generative AI. The 5 core differences between LLM and Generative AI are listed below.

1. Scope of Application

Generative AI covers a much broader range of uses because it works with different kinds of data beyond just text. That flexibility in Generative AI opens doors for creative domains like image design, music production, and even synthetic video creation. LLMs, by contrast, stay rooted in language. This specialization in LLMs means they excel at text-driven tasks like conversation, summarization, and drafting content. Businesses decide based on their needs: creative assets benefit most from generative AI, while research, writing, and customer communication suit LLMs better.

2. Output Modality

Generative AI delivers rich visual or auditory outputs that make it ideal for marketing campaigns, entertainment, and design-driven products. LLMs, however, produce only textual content, which limits their output to written responses. This limitation in LLMs is not necessarily a weakness because language-centric tools often provide more depth and precision when working with policies, legal drafts, or data analysis. Choosing one over the other depends on what kind of content an organization needs to generate.

3. Training Data

Generative AI models train on a huge mix of visual, audio, and textual data, which enables them to create diverse kinds of content. LLMs draw on enormous bodies of written text to sharpen their understanding of grammar, context, and nuance. The richer the textual data they receive, the better they mimic authentic communication styles. Generative AI benefits from broad training to make highly visual or auditory content feel lifelike, while LLMs benefit most from curated language datasets that enhance precision.

4. Core Goal

Generative AI exists to produce new, often imaginative content that reflects the patterns it learned from its training. LLMs aim to understand intent and meaning so they respond thoughtfully and clearly. That is why generative AI suits creators looking for novel concepts or visual prototypes, while LLMs help professionals produce accurate, detailed answers. Each model has a goal that reflects specific strengths. Generative AI emphasizes creativity, and LLMs focus on understanding and communication.

5. Relationship

LLMs are one part of the larger generative AI ecosystem. They represent a specialized branch that deals with language understanding and text production. Generative AI includes them alongside other tools like image and video generators, each with its own niche application. Companies aiming for a complete AI strategy often integrate both kinds. They use LLMs for accurate textual insight and generative AI for vivid, engaging visual assets. Recognizing this relationship empowers teams to put the right tool to work on the right challenge.

Key Differences Between LLM and Generative AI

Aspect | LLMs | Generative AI
Scope of Application | Focused on text-based tasks like summarization, translation, and Q&A | Broad; produces text, images, audio, video, and more
Output Modality | Text only | Multiple formats (images, video, music, text, code)
Training Data | Large-scale textual datasets | Diverse datasets (images, video, audio, text, etc.)
Core Goal | Understand language and generate accurate, human-like responses | Create new, often imaginative content across different media
Strengths | Language understanding, natural conversation, and textual precision | Creative asset generation, multimodal content creation
Example Models | GPT-4, PaLM, LLaMA, Claude | DALL·E, Midjourney, Runway, Sora, MusicLM
Relationship | A specialized branch of generative AI focused on language | A broad category that includes LLMs and other content generators

How Do LLMs and Generative AI Complement Each Other?

LLMs drive better understanding by processing language and analyzing meaning. LLMs identify user intent and structure responses around that intent. LLMs build on language patterns to craft helpful, accurate text that matches the conversation. Generative AI takes this textual guidance and transforms it into visual, auditory, or interactive media that enriches the user experience. Together, they produce content that feels dynamic and relevant.

Generative AI responds creatively by generating images, videos, and audio that reflect the ideas LLMs describe. Generative AI enhances these concepts by producing engaging, customized assets for different formats and audiences. LLMs supply the context and detail that generative AI needs to create more polished outputs. Generative AI and LLMs integrate smoothly into content workflows that scale across teams and industries.
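
A minimal sketch of that hand-off is shown below, assuming the OpenAI Python SDK with illustrative model names; any comparable text and image stack could be swapped in.

```python
# Sketch of an LLM-to-generative-AI hand-off: the LLM drafts a detailed
# visual brief, and an image model turns that brief into an asset.
# Assumes the OpenAI Python SDK and an OPENAI_API_KEY environment variable;
# model names are illustrative and interchangeable with other providers.
from openai import OpenAI

client = OpenAI()

brief = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{
        "role": "user",
        "content": "Write a one-sentence image prompt for a spring-sale banner "
                   "for an outdoor-gear brand.",
    }],
).choices[0].message.content

image = client.images.generate(model="dall-e-3", prompt=brief, size="1024x1024")
print(image.data[0].url)  # URL of the generated banner concept
```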

LLMs and generative AI deliver versatile AI-driven solutions that improve decision-making and creativity. LLMs help businesses recognize patterns and highlight key insights. Generative AI turns those insights into visualizations, marketing assets, and interactive presentations that add value at every stage. This close relationship between LLMs and generative AI supports faster innovation, stronger communication, and a more engaging user experience.

What are the Use Cases of LLM and Generative AI?

LLMs and generative AI serve many industries by providing powerful, specialized capabilities. LLMs focus on language understanding and text-based automation, while generative AI expands creative possibilities by generating diverse media types, including images, audio, and video. Together, they drive innovation, enhance productivity, and enable personalized experiences across various sectors.

LLM Use Cases

There are many use cases for LLMs. Common LLM use cases are listed below, with a short summarization sketch after the list.

  • Content Creation and Generation: LLMs assist with drafting and refining emails, blog posts, articles, social media content, and reports. LLMs generate creative formats like poems, scripts, or even code snippets, streamlining the content production process while maintaining style and coherence.
  • Customer Service and Support: LLM-powered chatbots and virtual assistants handle complex customer inquiries by understanding intent and context. LLM chatbots deliver personalized support, recommend products, and guide users through troubleshooting steps 24/7, improving customer satisfaction and reducing operational costs.
  • Language Translation and Localization: LLMs provide accurate and context-aware translations that go beyond word-to-word conversions. LLMs adapt messaging for different cultures and languages, helping global brands localize content effectively and reach wider audiences.
  • Data Analysis and Research: LLMs summarize lengthy documents, extract key insights from unstructured data, and assist in hypothesis generation for scientific research. LLMs enable analysts to process vast amounts of text quickly, improving decision-making speed and accuracy.
  • Business and Finance: LLMs analyze financial reports, detect potential fraud by identifying unusual transaction patterns, and assist in risk assessment by interpreting market data and news. LLMs generate customer insights for personalized marketing and product recommendations.
  • Healthcare: LLMs support medical professionals by analyzing patient records to suggest diagnoses, generating treatment plans, and helping with medical research by synthesizing literature. LLMs provide patients with clear and accessible explanations of medical information and answer health-related questions.
  • Education: LLMs create personalized learning materials, automate grading and feedback, and act as virtual tutors. LLMs adapt to different learning styles and help students with explanations or practice questions, making education more accessible and engaging.
  • Cybersecurity and Legal: LLMs analyze security logs to identify threats, automate contract review, assist with legal research, and help draft legal documents. By doing this, LLMs improve efficiency and reduce human error in these sensitive fields.
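
As a concrete example of the data analysis use case above, the sketch below summarizes a short report with a Hugging Face summarization pipeline; the compact model stands in for a full LLM, and the model name is an assumption.

```python
# Minimal document-summarization sketch using a Hugging Face pipeline.
# The model name is illustrative; any summarization-capable model works.
from transformers import pipeline

summarizer = pipeline("summarization", model="facebook/bart-large-cnn")

report = """
Quarterly traffic grew 18 percent, driven mostly by long-form guides.
Paid campaigns underperformed on mobile, while email referrals doubled.
The team recommends shifting budget toward content refreshes next quarter.
"""

summary = summarizer(report, max_length=40, min_length=10, do_sample=False)
print(summary[0]["summary_text"])
```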

Generative AI Use Cases

There are many use cases for Generative AI. Common Generative AI use cases are listed below, with a short image-generation sketch after the list.

  • Business and Marketing: Generative AI produces targeted advertising creatives, personalized campaign content, and customized sales materials. Generative AI generates product descriptions, creates branding visuals, and designs dynamic email templates. This helps companies engage customers with fresh, relevant content.
  • Content and Media Production: Tools like image and video generators create original artwork, animations, and soundscapes. Generative AI supports film, gaming, and entertainment industries by accelerating concept art creation, scene generation, and soundtrack composition, reducing time and cost.
  • Product Design and Engineering: Generative AI explores innovative product designs by generating multiple iterations based on design constraints. Generative AI aids architects by creating floor plans and 3D models, while optimizing manufacturing processes with CAD/CAM enhancements, enabling faster prototyping and more efficient production.
  • Healthcare and Drug Development: Generative AI accelerates drug discovery by simulating molecular structures and predicting compound efficacy. Generative AI generates synthetic patient data for clinical trials and helps create personalized medicine plans based on complex datasets.
  • Education and Training: Generative AI builds adaptive learning experiences with personalized visual aids, interactive simulations, and automated grading tools. Generative AI helps educators develop creative materials that cater to individual student needs, enhancing engagement and retention.
  • Retail and E-commerce: Generative AI personalizes the shopping experience by producing virtual try-on visuals, recommending products based on user behavior, and generating dynamic advertisements. This drives customer satisfaction and boosts conversion rates.
  • Finance and Fraud Detection: Generative AI creates synthetic financial data to test fraud detection systems, simulates market scenarios for risk analysis, and generates financial models that support investment strategies. These capabilities in Generative AI improve the robustness and accuracy of financial operations.
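
The sketch below illustrates the marketing use case above by generating a product visual with the open-source diffusers library; the model ID, prompt, and GPU assumption are illustrative rather than prescriptive.

```python
# Sketch of generating a marketing visual with Stable Diffusion via the
# diffusers library. Model ID and prompt are illustrative assumptions;
# a CUDA-capable GPU is assumed for reasonable generation speed.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    torch_dtype=torch.float16,
)
pipe = pipe.to("cuda")

prompt = "studio photo of a reusable water bottle on a mossy rock, soft morning light"
image = pipe(prompt, num_inference_steps=30).images[0]
image.save("product_concept.png")
```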

What Are The Challenges of LLM?

LLMs hold great promise for transforming industries, but they introduce practical and ethical hurdles that require careful attention. Understanding these challenges in LLMs is crucial to successful deployment and responsible innovation.

There are 8 main challenges of using an LLM. The 8 main challenges of using an LLM are listed below.

1. Data Quality and Bias

LLMs rely on massive datasets pulled from diverse online sources, which often include inconsistent, outdated, or biased information. Poor data quality in LLMs introduces errors and shapes outputs toward harmful stereotypes or culturally insensitive content. Even a subtle skew in training data influences how an LLM responds to certain topics. Addressing these issues in LLMs requires rigorous data curation, balanced representation, and ongoing monitoring of output to identify and mitigate unintended bias.

2. Hallucinations and Misinformation

Despite their fluency, LLMs do not have a true understanding of the information they produce. This leads to “hallucinations,” where the LLM model invents plausible-sounding but incorrect or unverifiable answers. Such misinformation in LLMs misleads users, erodes trust, and causes serious consequences in domains like healthcare, finance, or legal advice. Reducing hallucinations requires fact-checking pipelines, fine-tuned models trained on verified data, and techniques like retrieval-augmented generation to ground responses in up-to-date, trusted sources.
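
The snippet below is a minimal sketch of the retrieval-augmented generation idea, assuming the OpenAI Python SDK; the model names and the tiny in-memory knowledge base are illustrative assumptions.

```python
# Minimal retrieval-augmented generation (RAG) sketch: embed a small set of
# trusted documents, retrieve the closest one to the question, and ground the
# LLM's answer in it. Assumes the OpenAI Python SDK; model names are illustrative.
import numpy as np
from openai import OpenAI

client = OpenAI()

documents = [
    "Our refund policy allows returns within 30 days with a receipt.",
    "Support hours are 9am to 5pm Eastern, Monday through Friday.",
]

def embed(texts):
    response = client.embeddings.create(model="text-embedding-3-small", input=texts)
    return np.array([item.embedding for item in response.data])

doc_vectors = embed(documents)
question = "How long do customers have to return a product?"
query_vector = embed([question])[0]

# Cosine similarity to pick the most relevant trusted document.
scores = doc_vectors @ query_vector / (
    np.linalg.norm(doc_vectors, axis=1) * np.linalg.norm(query_vector)
)
context = documents[int(np.argmax(scores))]

answer = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{
        "role": "user",
        "content": f"Answer using only this context:\n{context}\n\nQuestion: {question}",
    }],
)
print(answer.choices[0].message.content)
```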

3. Computational and Energy Costs

Training and deploying LLMs at scale consumes immense computational power and energy. Training a single large model can emit as much carbon as several cars produce over years of operation. Beyond environmental impact, these costs limit large-scale LLM development to organizations with substantial computing budgets. Balancing model size with efficiency requires optimized hardware and investments in green computing initiatives that ease both sustainability and cost challenges.

4. Lack of Explainability and Transparency

LLMs are often treated as black boxes that generate text without clear, human-understandable reasoning. This opacity in LLMs makes it difficult to explain decisions or identify why the model produced a specific answer. Lack of transparency erodes trust among users and poses risks in regulated sectors like law, finance, or healthcare. Researchers and companies must work toward more explainable AI solutions, which include interpretability tools and model debugging techniques, to increase transparency and make their systems easier to audit.

5. Ethical and Privacy Concerns

LLMs inadvertently reproduce or even amplify harmful, illegal, or private content present in their training data. Personal data might appear in LLM model outputs without consent, creating privacy violations and compliance risks. Techniques like data anonymization, strict filtering of training corpora, and content moderation policies help mitigate these issues. Establishing strong ethical guidelines and involving diverse oversight teams further supports responsible deployment.
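
As one small, hedged example of the data handling work described above, the sketch below scrubs obvious personal identifiers from text before it is logged or reused; the regex patterns are illustrative and nowhere near exhaustive.

```python
# Illustrative sketch of scrubbing obvious personal data from text before it
# is logged or added to a training corpus. The regex patterns are simple
# examples, not a complete solution; real pipelines use dedicated PII tools.
import re

PII_PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "phone": re.compile(r"\+?\d[\d\s().-]{7,}\d"),
}

def scrub(text: str) -> str:
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[{label} removed]", text)
    return text

print(scrub("Contact Jane at jane.doe@example.com or +1 (555) 010-2345."))
# -> "Contact Jane at [email removed] or [phone removed]."
```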

6. Scalability and Deployment Complexity

Scaling LLMs for real-world use requires specialized hardware, complex architectures, and significant engineering expertise. Balancing LLM model performance with practical deployment needs, such as low-latency responses and cost-effectiveness, introduces complexity. Efficient deployment often requires model compression, hardware acceleration, or modular design to scale up or down based on demand without compromising quality or responsiveness.
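
The sketch below shows one simple compression technique, post-training dynamic quantization in PyTorch; the tiny model is a placeholder, and production LLM serving usually relies on more specialized tooling.

```python
# Sketch of post-training dynamic quantization in PyTorch, one simple form of
# model compression. The tiny model is a placeholder; production LLM serving
# typically relies on more specialized quantization and inference stacks.
import torch
import torch.nn as nn

model = nn.Sequential(           # stand-in for a much larger trained model
    nn.Linear(512, 512),
    nn.ReLU(),
    nn.Linear(512, 128),
)

quantized = torch.ao.quantization.quantize_dynamic(
    model, {nn.Linear}, dtype=torch.qint8   # store Linear weights as int8
)

x = torch.randn(1, 512)
print(quantized(x).shape)        # same interface, smaller weight footprint
```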

7. Rapid Obsolescence and Continuous Updating

LLMs evolve quickly as new research and better training techniques emerge. An LLM that performs well today quickly becomes outdated as language, world events, and expectations change. This constant need for retraining or fine-tuning requires substantial investment in engineering and data. Implementing continuous update processes and monitoring emerging trends helps keep LLMs accurate, relevant, and competitive in fast-moving domains.

8. Compliance with Regulations and Standards

As AI laws and policies tighten worldwide, organizations must make sure LLMs comply with evolving regulations on data privacy, algorithmic fairness, accessibility, and transparency. Teams must navigate these legal frameworks with legal expertise and careful system design to reduce liability. Companies that proactively embrace regulatory requirements, conduct impact assessments, and implement compliance checks prepare themselves to scale their AI solutions sustainably.

What Are The Challenges of Generative AI?

Generative AI drives innovation across media, design, and marketing, but its rapid adoption raises significant risks and obstacles. Understanding the challenges with Generative AI allows organizations to plan for responsible use and more sustainable deployment.

There are 8 main challenges of using Generative AI. The 8 main challenges of using Generative AI are listed below.

1. Intellectual Property and Copyright Issues

Generative AI often learns from existing images, text, and other media that are protected by copyright. Without proper licensing or attribution, Generative AI models risk infringing on the rights of creators or producing outputs that closely mimic protected works. Companies must navigate a complex legal landscape and implement guidelines that respect ownership, track data provenance, and encourage ethical content generation.

2. Misinformation and Deepfakes

Generative AI produces realistic yet fake images, videos, and articles that might spread misinformation or manipulate public perception. Generative AI deepfakes in particular harm reputations, influence elections, or cause public panic. Counteracting this requires robust detection tools, authentication mechanisms, and responsible usage policies to ensure that synthetic content is transparent and verifiable.

3. Data and Model Biases

Models reflect the biases present in the data they are trained on. Generative AI can amplify these biases and generate content that is discriminatory, offensive, or unfair. Addressing data and model bias in Generative AI requires diverse, balanced datasets and regular audits of model outputs. Developers must implement content filters and fine-tuning processes to catch harmful outputs before deployment.

4. High Computational and Energy Demands

Training and running generative AI models at scale consumes significant computational power and energy, which raises both financial and environmental concerns. Hardware requirements put generative AI tools out of reach for smaller organizations. Optimizing generative AI models for efficiency requires specialized hardware like GPUs or TPUs and green AI practices that help manage these resource requirements.

5. Quality and Consistency Control

Generative AI produces highly variable outputs, which complicates quality assurance. Outputs may stray from brand guidelines, contain errors, or lack stylistic consistency. In generative AI, human oversight and post-processing tools remain essential to maintain content standards. Building checks into automated workflows keeps outputs on message and in line with performance expectations.
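
A minimal sketch of such automated checks appears below; the specific rules (length limit, banned claims, required disclaimer) are assumptions standing in for real brand guidelines.

```python
# Illustrative post-generation quality gate: reject generated copy that breaks
# simple, assumed brand rules before it reaches publishing workflows.
BANNED_TERMS = {"guaranteed results", "best in the world"}   # assumed guidelines
MAX_WORDS = 120
REQUIRED_DISCLAIMER = "Results may vary."

def quality_issues(text: str) -> list[str]:
    """Return a list of rule violations; an empty list means the copy passes."""
    problems = []
    if len(text.split()) > MAX_WORDS:
        problems.append("too long")
    if any(term in text.lower() for term in BANNED_TERMS):
        problems.append("contains banned marketing claim")
    if REQUIRED_DISCLAIMER not in text:
        problems.append("missing required disclaimer")
    return problems

draft = "Our new platform delivers guaranteed results for every campaign."
print(quality_issues(draft))
# -> ['contains banned marketing claim', 'missing required disclaimer']
```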

6. Ethical and Privacy Concerns

Generative AI can produce content that violates personal privacy or ethical standards, particularly when trained on data containing personal identifiers or sensitive information. Generative AI generates harmful or explicit material when misused. Implementing strict data handling policies, content filters, and transparent opt-in and opt-out options protects privacy and upholds ethical obligations.

7. Scalability and Deployment Complexity

Moving generative AI from pilot tests into production introduces complexity, especially as demand grows. Load balancing, latency control, and model versioning all require advanced engineering for stable, real-time responses. Efficient generative AI deployment strategies use model pruning, caching, and hybrid architectures to manage these technical hurdles.
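
The sketch below illustrates the caching idea: identical prompts are served from an in-memory cache instead of re-running the model. The generate_content function is a hypothetical stand-in for any real model or API call.

```python
# Simple prompt-level cache for a generative model: repeated identical prompts
# return the stored result instead of re-running the model. `generate_content`
# is a hypothetical stand-in for any real model or API call.
import hashlib

_cache: dict[str, str] = {}

def generate_content(prompt: str) -> str:
    return f"(model output for: {prompt})"        # placeholder generation

def cached_generate(prompt: str) -> str:
    key = hashlib.sha256(prompt.encode("utf-8")).hexdigest()
    if key not in _cache:
        _cache[key] = generate_content(prompt)    # only call the model on a miss
    return _cache[key]

cached_generate("spring sale banner copy")   # model call
cached_generate("spring sale banner copy")   # served from cache
print(len(_cache))                           # -> 1
```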

8. Regulatory and Compliance Uncertainty

Regulators worldwide are still developing standards for generative AI, which makes legal compliance a shifting target. Laws around data usage, synthetic media labeling, and content safety change often. Staying proactive with compliance assessments while using generative AI requires industry best practices and policy reviews that help organizations reduce legal risk and build trust as the regulatory landscape evolves.

What Is the Future of LLM and Generative AI?

The future of LLMs and generative AI shines with innovation, accessibility, and enhanced purpose. Emerging agentic AI models will act autonomously and reason through tasks. LLMs and Generative AI will connect with external tools to simplify workflows. These systems will help manage appointments, write reports, and complete analyses.

Model improvements will focus on efficiency, context awareness, and multimodal abilities. Systems will understand text, images, audio, and even live data streams for richer outputs. Organizations will use fine-tuned LLMs for legal, financial, or healthcare tasks. Generative AI will continue to support creative content, virtual simulations, and synthetic training data.

Regulation and governance will shape the future of these technologies. State-level frameworks like those in California and New York are setting early standards. International treaties aim to ensure transparency and accountability. Ethical AI will become a core part of every deployment. Without responsible practices, companies will lose trust and face legal risks.

This future will demand balanced progress that combines powerful AI with sustainability and fairness. LLMs and generative AI will change content creation, decisions, and services across industries. Progress will matter most when powered by strong ethics and real transparency.
