Growth Engineering Glossary
I'm in the process of preparing for a new engineering role on a Growth team. I've worked heavily on digital marketing and growth projects, and I've held Rails engineering roles, but this is my first engineering role focused 100% on a growth team.
So, to help me acclimate to the role, I decided to analyze a few hundred top-ranking webpages on growth engineering topics and extract the most commonly used phrases into a glossary format.
Below is a curated version of the output, the terms I found most useful, along with an outline/mental structure for grouping these topics.
Table of Contents
- Methodology and Process Terms
- Role Titles and Team Structures
- A/B Testing and Experimentation Vocabulary
- Feature Flags and Rollout Terminology
- Growth Loops and Growth Model Terminology
- Network Effects Terminology
- Viral Mechanics and Referral Terminology
- PLG (Product-Led Growth) Terminology
- Activation, Retention, and Engagement Metrics
- Churn Terminology
- Funnel Optimization Language
- AARRR Pirate Metrics Framework
- Onboarding and User Journey Phrases
- Technical Growth Infrastructure Terms
- Attribution and Analytics Terminology
- Mobile Attribution Terms
- Identity and User Identification Terms
- Monetization and Conversion Optimization Language
- Pricing Optimization Terms
- North Star Metrics and Goal-Setting Vocabulary
- SaaS and Revenue Metrics
- Key benchmarks reference
- Sources consulted
- Methodology
Methodology and Process Terms
Growth Engineering — The systematic, technical approach to growth using data-driven experiments rather than gut-driven decisions. Often described as "writing code to make a company money." [Eric Feng] [Pragmatic Engineer]
Build to Learn — Growth engineering philosophy where code is shipped primarily to learn and validate hypotheses, not to build lasting features. Contrasts with product engineering's "build to last" approach. [Alexey MK]
Kill Your Darlings — Growth engineering mindset of willingness to throw away failed experiments without attachment, essential for rapid iteration. [Alexey MK]
Hypothesis-Driven Development — Approach where features and experiments are designed around testable hypotheses with clear success criteria. [Atlassian]
Experiment Roadmap — Prioritized backlog of growth experiments separate from the product roadmap. [Productboard]
RICE Framework — Prioritization framework: Reach × Impact × Confidence ÷ Effort. Used to prioritize growth experiments. [Product School]
ICE Framework — Simplified prioritization: Impact × Confidence × Ease, scored 1-10 for each factor. [CXL]
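The RICE and ICE formulas above are simple enough to sketch in a few lines. The idea names and scores below are illustrative, not from any real backlog:

```python
# Sketch: ranking hypothetical experiment ideas with RICE and ICE.
# All idea names and numbers here are made up for illustration.

def rice_score(reach, impact, confidence, effort):
    """RICE = Reach x Impact x Confidence / Effort."""
    return reach * impact * confidence / effort

def ice_score(impact, confidence, ease):
    """ICE = Impact x Confidence x Ease, each scored 1-10."""
    return impact * confidence * ease

ideas = [
    # (name, reach/quarter, impact 0.25-3, confidence 0-1, effort person-months)
    ("Simplify signup form", 8000, 2, 0.8, 1),
    ("Referral widget",      3000, 3, 0.5, 3),
]

ranked = sorted(ideas, key=lambda i: rice_score(*i[1:]), reverse=True)
for name, *params in ranked:
    print(f"{name}: RICE = {rice_score(*params):,.0f}")
```

The division by effort is what makes RICE favor cheap experiments, which matches the growth-engineering bias toward velocity.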
Fake Door Test — Offering a not-yet-available feature to gauge customer interest before investing in full development. [ProdPad]
Tent vs Skyscraper — Metaphor for growth vs product engineering approaches. Tents (growth) optimize for speed of setup/teardown; skyscrapers (product) optimize for durability and permanence. [Alexey MK]
Growth Engineering vs Product Engineering — Product engineers build core product features for longevity; growth engineers optimize business metrics through rapid experimentation. [Pragmatic Engineer]
Growth Engineering vs Marketing Engineering — Marketing engineers serve internal teams and focus on productivity tools. Growth engineers focus on business metrics and customer-facing optimization. [Pragmatic Engineer]
Role Titles and Team Structures
Growth Engineer — A software engineer who combines technical expertise with data-driven strategies to drive user acquisition, engagement, and revenue growth. Unlike product engineers who build features for longevity, growth engineers optimize business metrics through rapid experimentation. Origins trace to Facebook in 2007 under Chamath Palihapitiya. [Pragmatic Engineer] [UserGuiding]
Growth PM (Growth Product Manager) — A product manager responsible for improving specific business metrics rather than owning a specific product area. Focuses on short-term experiments across the entire user funnel—from acquisition through monetization. [ProductLed] [Userpilot]
Growth Hacker — Marketing-oriented role focused on creative, low-cost strategies for customer acquisition. Coined by Sean Ellis in 2010. Relies on non-technical solutions and shortcuts, unlike growth engineers who use systematic, technical approaches. [Medium]
Growth Designer — Designer within growth teams who creates user interfaces optimized for engagement and conversion. Works closely with growth engineers on A/B test variations and rapid experimentation. [Productboard]
Growth Data Analyst / Product Analyst — Data specialists who dive deep into user data, extract insights, decode behavior patterns, monitor KPIs, and provide actionable recommendations to inform growth strategy. [Productboard]
Technical Growth Marketer — Marketer who understands backend development, APIs, and script building. Bridges the gap between marketing and engineering. [Pragmatic Engineer]
VP of Growth / Head of Growth — Executive responsible for overall growth strategy, team structure, and organizational alignment. [Andrew Chen]
Chief Growth Officer (CGO) — Executive position leading growth teams; responsible for setting strategy, goals, and tasks across all growth initiatives. [Mixpanel]
Growth Team — Cross-functional team dedicated to driving user acquisition, engagement, and retention. Typically includes engineers, PMs, designers, data analysts, and sometimes marketers. [Product School] [CXL]
Independent Growth Team — Standalone team with its own PM, engineers, designers, and analysts functioning as a "mini startup" within the company. [Andrew Chen]
Embedded Growth Function — Growth specialists who sit within core product teams and own specific metrics. [CXL]
Hybrid Growth Model — Combination of independent and embedded approaches; adapts to company needs and scales with organizational growth. [Product School]
Growth Pod — Small, focused team carved out to work full-time on key growth initiatives. [Mixpanel]
Owner Model — Growth team structure where growth engineering operates independently with full ownership of their domain and metrics. [Pragmatic Engineer]
Hitchhiker Model — Growth engineers who work alongside other product teams, contributing growth expertise while those teams retain primary ownership. [Pragmatic Engineer]
A/B Testing and Experimentation Vocabulary
A/B Testing (Split Testing) — Core growth engineering methodology comparing two versions (control vs. variant) to measure which performs better on specific metrics. [ProdPad] [Netflix Tech Blog]
Control Group — The group of users that sees the original, unchanged version of the experience. Serves as the baseline for comparison. [Mida]
Treatment (Variant) — The new or modified version of the experience being tested against the control. [Mida]
A/B/n Test — A variation of A/B testing where multiple variants (n) are tested against a control simultaneously. [VWO]
A/A Test — An experiment where identical versions are tested against each other to validate the testing framework and ensure there are no systemic biases. [BigDATAwire]
Multivariate Test (MVT) — An experiment that tests multiple variables and their combinations simultaneously to determine which combination produces the best outcome. [Braze]
Holdout Group — A subset of users deliberately excluded from receiving new features to serve as a long-term baseline for measuring cumulative impact. [Netflix/SlideShare]
Statistical Significance — An indicator that the observed difference between control and treatment groups is unlikely to have occurred by random chance alone. Typically measured at 95% confidence level (α = 0.05). [Mida]
P-value — The probability of observing results as extreme or more extreme than the observed results, assuming the null hypothesis is true. [Mida]
Confidence Level — The level of certainty (e.g., 95%) that the true effect lies within the confidence interval. [Mida]
Confidence Interval — The range of values within which the true metric value is likely to fall if the experiment were repeated many times. [Mida]
Minimum Detectable Effect (MDE) — The smallest true effect size that an A/B test can reliably detect with specified statistical power. [BigDATAwire]
Statistical Power — The probability that an experiment will correctly detect a real effect as statistically significant when it exists. Convention is 80% power. [Mida]
Type I Error (False Positive) — Incorrectly rejecting the null hypothesis when there is actually no real effect. [Mida]
Type II Error (False Negative) — Failing to reject the null hypothesis when a real effect actually exists. [Mida]
Null Hypothesis — The assumption that there is no significant difference between the control and treatment groups. [Mida]
Sample Size — The number of visitors or users included in an experiment. Larger samples increase power and enable detection of smaller effects. [Mida]
Lift — The percentage improvement in a metric between the treatment and control groups. Calculated as (Treatment - Control) / Control × 100%. [Mida]
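Several of the terms above (lift, p-value, statistical significance, null hypothesis) come together in the standard two-proportion test for a conversion-rate experiment. A minimal stdlib-only sketch, with made-up conversion counts:

```python
from math import sqrt
from statistics import NormalDist

def lift(control_rate, treatment_rate):
    """Relative lift: (Treatment - Control) / Control x 100%."""
    return (treatment_rate - control_rate) / control_rate * 100

def two_proportion_p_value(conv_c, n_c, conv_t, n_t):
    """Two-sided p-value for a difference in conversion rates,
    using the pooled-proportion z-test."""
    p_c, p_t = conv_c / n_c, conv_t / n_t
    p_pool = (conv_c + conv_t) / (n_c + n_t)
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_c + 1 / n_t))
    z = (p_t - p_c) / se
    return 2 * (1 - NormalDist().cdf(abs(z)))

# Hypothetical experiment: 10,000 users per arm, 5.0% vs 5.6% conversion.
p = two_proportion_p_value(conv_c=500, n_c=10_000, conv_t=560, n_t=10_000)
print(f"lift: {lift(0.05, 0.056):.1f}%  p-value: {p:.4f}")
```

Note how a 12% relative lift on these sample sizes still lands near p ≈ 0.06, above the conventional 0.05 threshold; detecting small effects reliably is exactly why sample size and MDE planning matter.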
Primary Metric (Success Metric) — The central metric that the experiment aims to optimize, directly tied to business goals. [VWO]
Guardrail Metric — Secondary metrics monitored during experiments to ensure changes don't have unintended negative consequences. [Eppo] [VWO]
Diagnostic Metric — Metrics providing deeper insight into how an experiment affected user behavior beyond the primary success metric. [VWO]
CUPED (Controlled-experiment Using Pre-Experiment Data) — A variance reduction technique using pre-experiment data to reduce noise and accelerate experiment results by 20-65%. [BigDATAwire]
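The mechanics of CUPED are compact: subtract off the part of the in-experiment metric that pre-experiment data already predicts. A sketch on synthetic data (the 0.8 correlation and all numbers are invented to show the effect):

```python
# CUPED sketch: adjust an experiment metric using a pre-experiment covariate.
# Synthetic data; real pipelines would use e.g. prior-period values of the
# same metric per user.
import random
from statistics import mean, variance

random.seed(42)
# x: pre-experiment metric (e.g. prior-week sessions), y: in-experiment metric.
x = [random.gauss(10, 3) for _ in range(5_000)]
y = [xi * 0.8 + random.gauss(2, 1) for xi in x]  # y strongly predicted by x

def cuped_adjust(y, x):
    """y_cuped = y - theta * (x - mean(x)), where theta = cov(x, y) / var(x)."""
    mx, my = mean(x), mean(y)
    cov = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / (len(x) - 1)
    theta = cov / variance(x)
    return [yi - theta * (xi - mx) for xi, yi in zip(x, y)]

y_adj = cuped_adjust(y, x)
reduction = 1 - variance(y_adj) / variance(y)
print(f"variance reduced by {reduction:.0%}")
```

The adjustment leaves the treatment effect unbiased (the covariate predates the experiment) while shrinking variance, which is what lets experiments reach significance sooner.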
Sequential Testing — A method allowing data analysis at multiple points during an experiment while maintaining statistical validity. [Mida]
Bayesian Inference — A statistical approach using prior beliefs updated with observed data to calculate probability distributions of effects. [Mida]
Frequentist Statistics — The traditional statistical framework using fixed hypothesis testing, p-values, and confidence intervals. [Mida]
Bonferroni Correction — A statistical adjustment reducing false-positive rates when conducting multiple hypothesis tests. [Mida]
Multi-Armed Bandit (MAB) — An adaptive algorithm that dynamically allocates traffic to better-performing variants in real-time. [VWO] [Wikipedia]
Thompson Sampling — A Bayesian bandit algorithm that samples from posterior distributions to determine which variant to show. [Braze]
Contextual Bandit — An advanced MAB that uses context about users, variants, and environment to make personalized allocation decisions. [Braze]
Sample Ratio Mismatch (SRM) — A diagnostic check detecting when actual traffic split differs significantly from intended allocation. [BigDATAwire]
Stratified Sampling — Random distribution with enforcement of sample proportions across segments to ensure representative allocation. [Mida]
Novelty Effect — A temporary change in user behavior due to the newness of a feature, which may fade over time. [Mida]
Primacy Effect — Users preferring original experiences due to familiarity, potentially biasing results against new variants. [Mida]
Winner's Curse — The phenomenon where winning variants' effects appear inflated when measured individually. [Mida]
Experiment Velocity — The rate at which an organization can run experiments, often measured in experiments per week or month. [Pragmatic Engineer]
Overall Evaluation Criterion (OEC) — A composite metric combining multiple signals into a single measure for experiment decision-making. [Netflix Tech Blog]
Feature Flags and Rollout Terminology
Feature Flag — A software development technique that enables or disables features without code deployment, allowing controlled feature releases and experiments. [Pragmatic Engineer]
Progressive Rollout (Percentage Rollout) — Gradually increasing the percentage of users exposed to a feature over time (e.g., 10% → 25% → 50% → 100%).
Kill Switch — An emergency feature flag that can instantly disable a feature if problems are detected.
Targeting Rules — Logic determining which users see which flag variation based on attributes like user properties, device type, or location.
Fallthrough (Default Rule) — The rule applied when no specific targeting conditions match.
Release Flag (Rollout Flag) — A temporary boolean flag used to control incremental feature releases.
Experiment Flag — A feature flag specifically configured for A/B testing, often with multiple variations and associated metrics.
Canary Release (Canary Test) — Releasing a change to a small subset of users before broader rollout to detect issues early.
Dark Launch — Deploying code to production without exposing it to users, enabling testing of infrastructure before feature activation.
Bucketing — The process of assigning users to experiment variants, typically using consistent hashing of user identifiers.
Exposure — The moment when a user is shown a variant and their assignment is logged for analysis.
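Bucketing via consistent hashing, as described above, can be sketched directly. Salting the hash with the experiment name (an assumption about the scheme, though it is common practice) keeps a user's assignments independent across experiments:

```python
import hashlib

def bucket(user_id: str, experiment: str, variants=("control", "treatment")):
    """Deterministically assign a user to a variant by hashing
    experiment + user. Same inputs always yield the same bucket."""
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
    return variants[int(digest, 16) % len(variants)]

# Same user always lands in the same bucket for a given experiment:
assert bucket("user-42", "new-onboarding") == bucket("user-42", "new-onboarding")

# Across many users the split comes out close to 50/50:
counts = {"control": 0, "treatment": 0}
for i in range(10_000):
    counts[bucket(f"user-{i}", "new-onboarding")] += 1
print(counts)
```

Percentage rollouts work the same way: hash to a number in [0, 100) and expose the user if it falls below the rollout percentage, so ramping 10% → 25% only ever adds users, never reshuffles them.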
Growth Loops and Growth Model Terminology
Growth Loop — A closed system where inputs go through a series of pre-defined steps to generate outputs, which are then reinvested as new inputs. Unlike linear funnels, loops compound over time. Reforge popularized this framework. [Ortto]
Viral Loop — A mechanism where users refer others to a product, and those referrals become referrers through the same mechanism, creating a self-reinforcing cycle. [Ortto]
Content Loop — A growth loop where content creation and distribution attracts users who then create more content. Can be user-generated (Pinterest) or company-generated (HubSpot). [Ortto]
Paid Loop — A growth loop where paid advertising attracts users/customers, generating revenue that can be reinvested into more paid advertising. [Ortto]
Sales Loop — A loop where a sales force acquires customers, generating revenue that funds hiring more salespeople. [Ortto]
Acquisition Loop — Any loop focused specifically on bringing new users into the product.
Loop Velocity — The speed at which a growth loop completes one full cycle. Faster velocity means faster compounding growth.
Loop Efficiency — A measure of how much output a loop generates relative to its inputs.
K-Factor (Viral Coefficient) — The number of new users each existing user generates through invitations or referrals. Formula: K = i (invites sent per user) × c (conversion rate of invites). A K-factor >1 indicates exponential growth. [Yotpo]
Viral Cycle Time — The time from a user joining to successfully referring a new user who joins. Shorter cycle times accelerate viral growth. [Yotpo]
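The K = i × c formula compounds per viral cycle, which a short simulation makes concrete (the invite numbers are illustrative):

```python
def viral_growth(initial_users, invites_per_user, invite_conversion, cycles):
    """K = invites_per_user x invite_conversion; each cycle, only the
    newest users send invites, and their converts join the next cycle."""
    k = invites_per_user * invite_conversion
    total = new = initial_users
    history = [total]
    for _ in range(cycles):
        new = new * k
        total += new
        history.append(round(total))
    return k, history

k, history = viral_growth(1_000, invites_per_user=5, invite_conversion=0.25,
                          cycles=5)
print(f"K = {k}: {history}")  # K > 1 compounds; K < 1 tapers off
```

Shortening viral cycle time does not change K, but it packs more cycles into the same calendar period, which is why it accelerates growth just as much as raising K does.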
Compounding Growth — Growth that builds upon itself over time, creating exponential rather than linear growth curves.
Critical Mass — The minimum number of users required for a network or viral effect to become self-sustaining.
Growth Flywheel — A framework visualizing growth as a circular, self-reinforcing system rather than a linear funnel. [Chameleon]
Flywheel Friction — Anything that slows down the flywheel's momentum—poor onboarding, confusing pricing, bad UX. [Chameleon]
Network Effects Terminology
Network Effect — When a product becomes more valuable as more people use it. By NfX's oft-cited estimate, network effects are responsible for roughly 70% of the value created by tech companies since 1994.
Direct Network Effect — When increased usage directly increases value to users (e.g., telephone networks—more users = more people to call).
Indirect Network Effect — When increased usage of one side increases value for a different side (e.g., more iOS users = more apps developed).
Two-Sided Network Effect — Networks with two distinct user groups (supply and demand) that provide complementary value to each other.
Data Network Effect — When product value increases with more data, and usage generates more useful data.
Metcalfe's Law — The value of a network is proportional to the square of the number of users (N²).
Reed's Law — Network value increases exponentially (2^N) as sub-groups can form within the network.
Marketplace Network Effect — Two-sided effect where buyers attract sellers and sellers attract buyers.
Platform Network Effect — Supply side engineers products specifically for the platform (e.g., iOS app developers).
Asymptotic Network Effect — When initial supply quickly adds value but additional supply yields diminishing returns.
Multi-tenanting — When users can participate in multiple competing networks simultaneously, weakening network lock-in.
Market Tipping — When one network gains enough advantage that it "pulls away" from competitors.
Bandwagon Effect — Social pressure to join a network to avoid being "left out."
Network Density — The number of connections between nodes in a network. Higher density = more valuable network.
Viral Mechanics and Referral Terminology
Referrer/Advocate — The existing customer who recommends a product to others and earns rewards for successful referrals. [Yotpo]
Referee/Referred Friend — The new customer who discovers a brand based on an advocate's recommendation. [Yotpo]
Double-Sided Incentive (Give X, Get Y) — A referral structure where both the referrer and the referred friend receive rewards. [Yotpo]
Referral Link/Code — A unique identifier assigned to each referrer for tracking successful referrals and attributing rewards. [Yotpo]
Referral Loop — The complete cycle: advocate makes purchase → joins program → shares link → friend converts → both get rewards → friend becomes advocate. [Yotpo]
Invite Mechanics — The system design for how users invite others—including invite prompts, sharing channels, friction reduction, and timing.
Referral Rate — The percentage of users who successfully refer at least one new user within a given time period.
Share Rate — The percentage of users who share referral links, regardless of conversion outcome.
Tiered Referral Program — A system offering increasingly valuable rewards as advocates refer more customers. [Yotpo]
Inherent Virality — When the product's core use case naturally involves sharing (e.g., Calendly sends invites by its nature) versus engineered virality through incentives.
Word-of-Mouth (WOM) — Organic sharing and recommendations between people.
Social Proof — The psychological phenomenon where people look to others' actions to determine their own.
PLG (Product-Led Growth) Terminology
Product-Led Growth (PLG) — A business methodology where user acquisition, expansion, conversion, and retention are all driven primarily by the product itself rather than sales or marketing. Coined by OpenView's Blake Bartlett in 2016. [Chameleon]
Product Qualified Lead (PQL) — A lead who has experienced the product's value firsthand (via free trial or freemium) and demonstrated buying intent through product usage behaviors. [Chameleon]
Product Qualified Account (PQA) — The account-level equivalent of a PQL for B2B, identifying high-value sales opportunities based on collective usage patterns.
Time to Value (TTV) — The time it takes for a user to realize meaningful value from the product after first use. [ProductLed]
Time to First Value (TTFV) — Specifically measures the time until a user experiences the initial benefit.
Self-Serve — A go-to-market model where users can discover, evaluate, adopt, and purchase the product without requiring sales assistance.
Freemium — A pricing model offering free access to basic features, with paid upgrades for premium functionality.
Free Trial — A time-limited full access model that lets users experience the complete product before committing to purchase.
Reverse Trial — A model where users start with full premium features, then downgrade to the free tier after the trial ends.
PQL Rate — The percentage of new signups reaching PQL status.
PQL to Paid Conversion Rate — The percentage of PQLs who convert to paying customers.
Time to PQL — How long it takes a user to reach PQL status.
Product-Led Sales (PLS) — A hybrid approach combining PLG's bottom-up motion with traditional top-down sales, using product usage data to identify sales opportunities.
Value Metric — The unit of measure that aligns pricing with customer value (e.g., number of users, messages sent, storage used).
Bottom-Up Adoption — When end users adopt a product independently, then drive organizational adoption.
Land and Expand — Strategy of starting small within an organization (land) then growing usage and revenue within that account over time (expand).
PLG Flywheel Stages
Evaluator — First stage user exploring whether the product might solve their problem. [Chameleon]
Beginner — User who has started using the product but hasn't activated. [Chameleon]
Regular — Activated user who has adopted the product into their workflow. [Chameleon]
Champion — Highly engaged user invested in the product's success who becomes an advocate. [Chameleon]
Activate → Adopt → Adore → Advocate — The PLG Flywheel action sequence connecting user stages. [Chameleon]
Activation, Retention, and Engagement Metrics
Aha Moment — The pivotal point when a user suddenly realizes and experiences the core value of a product. Examples: Facebook's 7 friends in 10 days, Twitter's following 30 people, Slack's 2,000 messages exchanged. [Amplitude]
Activation — The process of turning new sign-ups into engaged users who clearly understand and experience product value. [ProductLed]
Activation Rate — The percentage of users who complete a defined activation milestone. Industry average for SaaS is approximately 37.5%.
Activation Event — A specific, measurable action that signifies a user has experienced product value.
Activation Velocity — A metric measuring how quickly a cohort of users activates over time. [ProductLed]
Setup Moment — The point when users complete essential configuration steps that enable them to derive value.
Daily Active Users (DAU) — The number of unique users who engage with a product within a 24-hour period. [Geckoboard]
Weekly Active Users (WAU) — The number of unique users who engage with a product over a 7-day period.
Monthly Active Users (MAU) — The number of unique users who engage with a product over a 30-day period. [Geckoboard]
DAU/MAU Ratio (Stickiness) — The proportion of monthly active users who engage daily. Standard benchmark is 10-25%; top apps like WhatsApp exceed 50%. [Geckoboard] [Gainsight]
Stickiness — How often users return to engage with a product. A sticky product becomes part of users' daily routines. [Wall Street Prep]
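The DAU/MAU ratio falls out of any event log with user IDs and activity dates. A tiny sketch with invented users (a daily user, a weekly user, and a one-off):

```python
from datetime import date, timedelta

# Hypothetical event log: (user_id, date of activity).
events = (
    [("u1", date(2024, 3, d)) for d in range(1, 31)]        # daily user
    + [("u2", date(2024, 3, d)) for d in (1, 8, 15, 22, 29)]  # weekly user
    + [("u3", date(2024, 3, 5))]                              # one-off user
)

def stickiness(events, day):
    """DAU/MAU for a given day: that day's actives over the
    trailing-30-day actives (both as sets of unique users)."""
    dau = {u for u, d in events if d == day}
    window_start = day - timedelta(days=29)
    mau = {u for u, d in events if window_start <= d <= day}
    return len(dau) / len(mau)

print(f"stickiness on 2024-03-30: {stickiness(events, date(2024, 3, 30)):.0%}")
```

Only the daily user counts toward DAU on March 30, while all three count toward MAU, giving 33%: a ratio well above the 10-25% benchmark cited above, purely because this toy population skews heavily toward the habitual user.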
Retention Rate — The percentage of users who continue engaging with a product over a specific time period. [Amplitude]
Retention Curve — A visual representation showing how user retention changes over time, typically declining before reaching an asymptote. [Amplitude]
Retention Asymptote — The point where a cohort's retention curve flattens, indicating the percentage of users expected to remain long-term.
N-Day Retention — Retention measured at specific intervals after sign-up (Day 1, Day 7, Day 30, etc.). [Adjust]
Rolling Retention — Measures users who return on or after a specific day, rather than on that exact day.
Cohort — A group of users who share a common characteristic tracked together over time. [Headline]
Cohort Analysis — A method of grouping users by shared characteristics and tracking their behavior over time. [Medium]
Acquisition Cohort — Users grouped by when they were acquired (e.g., all January sign-ups). [Amplitude]
Behavioral Cohort — Users grouped by actions they've taken within the product. [Amplitude]
Power Users — The most engaged segment of users who use the product frequently and deeply.
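Acquisition-cohort retention curves reduce to one calculation: of the users who signed up in week W, what share was active W + n weeks later? A sketch on invented weekly-activity records:

```python
# Hypothetical records: (user_id, signup_week, set of weeks active).
users = [
    ("u1", 0, {0, 1, 2, 3}),
    ("u2", 0, {0, 1}),
    ("u3", 0, {0}),
    ("u4", 1, {1, 2}),
    ("u5", 1, {1}),
]

def retention_curve(users, cohort_week, horizon):
    """Share of a signup cohort active in each week after signup."""
    cohort = [u for u in users if u[1] == cohort_week]
    curve = []
    for offset in range(horizon + 1):
        active = sum(1 for _, signup, weeks in cohort
                     if signup + offset in weeks)
        curve.append(active / len(cohort))
    return curve

print(retention_curve(users, cohort_week=0, horizon=3))
```

For the week-0 cohort this yields 100% → 67% → 33% → 33%: the flattening at 33% is the retention asymptote, the long-term keeper rate for that cohort.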
Churn Terminology
Churn Rate — The percentage of customers who stop using a product or cancel subscriptions during a given period. [ChurnZero]
Customer Churn (Logo Churn) — The number/percentage of customers lost, regardless of their revenue value. [ChurnZero]
Revenue Churn (MRR/ARR Churn) — The amount of recurring revenue lost due to cancellations and downgrades. [Mercury]
Gross Churn — Revenue lost from churned customers without accounting for expansion revenue. [Mercury]
Net Churn — Revenue lost minus revenue gained from existing customer expansions. Negative net churn (expansion revenue outgrowing churned revenue) is a strong signal of account health. [Mercury]
Voluntary Churn — When customers intentionally choose to cancel or not renew. [Younium]
Involuntary Churn — Customer loss due to payment failures or other non-intentional reasons. [SaaS Academy]
Churn Cohort Analysis — Grouping customers by acquisition time to analyze churn behavior patterns.
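The gross-versus-net distinction above is easiest to see in numbers. A sketch with a hypothetical month of MRR movements:

```python
def churn_metrics(start_mrr, churned_mrr, downgrade_mrr, expansion_mrr):
    """Gross churn ignores expansion; net churn subtracts it.
    A negative net churn means expansion outgrew the losses."""
    gross = (churned_mrr + downgrade_mrr) / start_mrr
    net = (churned_mrr + downgrade_mrr - expansion_mrr) / start_mrr
    return gross, net

# Hypothetical month: $100k starting MRR, $3k cancelled, $1k in downgrades,
# $6k in upgrades from existing customers.
gross, net = churn_metrics(100_000, 3_000, 1_000, 6_000)
print(f"gross churn: {gross:.1%}, net churn: {net:.1%}")
```

Here gross churn is 4.0% while net churn is -2.0%: the same month looks like a leak or like compounding growth depending on which definition a dashboard uses, which is why the distinction matters.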
Funnel Optimization Language
Conversion Funnel (Sales Funnel/Marketing Funnel) — A visualization of the steps users take from awareness to completing a desired action.
TOFU (Top of Funnel) — The awareness stage where prospects first discover a product.
MOFU (Middle of Funnel) — The consideration stage where prospects evaluate whether a product fits their needs.
BOFU (Bottom of Funnel) — The decision stage where prospects are ready to convert/purchase.
AIDA Model — Attention, Interest, Desire, Action—a classic marketing framework describing customer journey stages.
Conversion Rate — The percentage of users who complete a desired action out of total users at a funnel stage.
Drop-off Rate — The percentage of users who abandon the funnel at a specific step.
Funnel Friction — Any obstacle, confusion, or unnecessary step that causes users to abandon the funnel.
Friction Point — A specific moment in the user journey where users struggle or abandon the process.
Micro-Conversion — Small, interim actions users take on the way to a macro-conversion (e.g., adding to cart before purchase).
Macro-Conversion — The primary goal action (e.g., purchase, subscription sign-up).
Funnel Analysis — Tracking user progression through sequential steps toward a goal, identifying drop-off points. [Medium]
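Funnel analysis as defined above is step-to-step division over ordered counts. A sketch with invented step counts from a signup funnel:

```python
# Hypothetical funnel step counts, e.g. from an analytics export.
funnel = [
    ("Visited landing page", 10_000),
    ("Started signup",        3_000),
    ("Completed signup",      2_100),
    ("Activated",               900),
]

# Step-to-step conversion and drop-off rates expose the leakiest stage.
for (prev_step, prev_n), (step, n) in zip(funnel, funnel[1:]):
    conversion = n / prev_n
    print(f"{prev_step} -> {step}: {conversion:.0%} convert, "
          f"{1 - conversion:.0%} drop off")

overall = funnel[-1][1] / funnel[0][1]
print(f"overall conversion: {overall:.0%}")
```

In this toy funnel the landing-page-to-signup step loses 70% of users, making it the obvious friction point to attack first even though every stage leaks.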
AARRR Pirate Metrics Framework
AARRR (Pirate Metrics) — Dave McClure's 2007 framework tracking five user-behavior metrics: Acquisition, Activation, Retention, Referral, Revenue. Called "pirate metrics" because the acronym sounds like "Arrr!" [Built In]
Acquisition — First stage; how users discover the product. [Built In]
Activation — Second stage; user's first valuable experience with product. [Built In]
Retention — Third stage; keeping users engaged and returning. [Built In]
Referral — Fourth stage; users recommending product to others. [Built In]
Revenue — Fifth stage; monetizing user engagement. [Built In]
RARRA Framework — Alternative framework prioritizing Retention first: Retention, Activation, Referral, Revenue, Acquisition.
Onboarding and User Journey Phrases
User Onboarding — The process of acquainting new users with a product, guiding them from sign-up to activation. [Userpilot]
Onboarding Flow — A step-by-step sequence introducing users to a product's features and value. [Userpilot]
Primary Onboarding — Initial onboarding focused on core features and getting users to their first aha moment. [The User Flow]
Secondary Onboarding — Introducing users to advanced features building on core value. [The User Flow]
Tertiary Onboarding — Account expansion focused on upselling additional features.
Product Tour — A guided walkthrough introducing users to product features and interface. [Appcues]
Interactive Walkthrough — Hands-on guidance where users learn by performing actions within the product.
Onboarding Checklist — A list of tasks guiding users through setup and initial product engagement. [Userpilot]
Progress Bar — A visual indicator showing users how far they've progressed through onboarding.
Empty State — The initial state of a product interface before users add data, often used to provide guidance.
Welcome Survey — A questionnaire during sign-up collecting user preferences to personalize onboarding.
Persona-Based Onboarding — Customizing onboarding flows based on user type, role, or goals.
Progressive Disclosure — Gradually revealing product features to avoid overwhelming new users.
Deferred Account Creation — Postponing registration until users have experienced product value.
Gradual Engagement — Allowing users to experience value before requiring commitment.
Onboarding Completion Rate — Percentage of users who finish the onboarding process.
Tooltip — A contextual hint appearing when users hover over or click UI elements.
Hotspot — A pulsing indicator drawing attention to specific features or actions.
Modal — A pop-up window requiring user attention or action.
Slideout — A panel that slides in from the screen edge with information or prompts.
Auth vs Unauth — Authenticated versus unauthenticated user experiences; different optimization strategies apply to logged-in users versus anonymous visitors. [Pragmatic Engineer]
Technical Growth Infrastructure Terms
Growth Stack — Collection of tools and technologies used by growth teams for analytics, experimentation, automation, and optimization.
Experiment Platform — Infrastructure for standardizing A/B test setup, user bucketing, and statistical methodology. [Netflix Tech Blog]
MarTech (Marketing Technology) — Tools enabling marketers to work without engineering involvement: landing page builders, email platforms, analytics tools.
Customer Data Platform (CDP) — Software that collects, unifies, and activates customer data from multiple sources to create persistent, unified customer profiles. Examples: Segment, mParticle, Rudderstack.
Data Management Platform (DMP) — A platform focused on third-party data for advertising audience targeting.
Event — A discrete user action or system occurrence tracked in analytics (e.g., page_view, button_click, purchase).
Event Properties — Metadata attached to events providing additional context.
User Traits — Attributes stored on user profiles that persist across sessions.
Event Streaming — Real-time continuous transmission of event data as it occurs.
Event Broker (Event Bus) — Infrastructure (e.g., Apache Kafka) that receives, stores, and distributes event streams.
ETL (Extract, Transform, Load) — Traditional data pipeline pattern for data movement and transformation.
ELT (Extract, Load, Transform) — Modern pattern where raw data is loaded directly into a data warehouse, then transformed.
Data Pipeline — Automated system for moving data from source systems through processing stages to destinations.
Reverse ETL — Moving data from a data warehouse back to operational tools for activation.
Data Warehouse — Centralized repository optimized for analytical queries on historical data.
Change Data Capture (CDC) — Technology that detects and captures changes in source databases for real-time synchronization.
Instrumentation — The process of adding tracking code to applications to capture user behavior events.
Tracking Plan — Documentation specifying which events and properties to track, their definitions, and implementation requirements.
Event Taxonomy — The naming conventions, hierarchy, and structure for organizing tracked events consistently.
Auto-Capture (Auto-Track) — Automated collection of user interactions without manual event instrumentation.
Device Mode (Client-Side) — Analytics implementation where tracking libraries load directly on user devices.
Cloud Mode (Server-Side) — Analytics implementation where data is sent to your servers first, then forwarded to destinations.
Attribution and Analytics Terminology
Multi-Touch Attribution (MTA) — A measurement technique that assigns fractional credit to multiple touchpoints along the customer journey.
Last-Touch Attribution — An attribution model that assigns 100% credit to the final touchpoint before conversion.
First-Touch Attribution — An attribution model giving 100% credit to the first marketing touchpoint.
Linear Attribution — A multi-touch model that distributes equal credit across all touchpoints.
Time Decay Attribution — A model assigning progressively more credit to touchpoints occurring closer to conversion.
U-Shaped Attribution (Position-Based) — Assigns 40% credit each to first and last touchpoints, with 20% distributed across middle interactions.
W-Shaped Attribution — Gives 30% credit each to first touch, lead creation, and opportunity creation touchpoints.
Data-Driven Attribution (Algorithmic) — Uses machine learning to dynamically assign credit based on actual conversion path data.
View-Through Attribution — Credits conversions to ad impressions that were viewed but not clicked.
Click-Through Attribution — Credits conversions only to ads that were actually clicked.
Attribution Window (Lookback Window) — The time period during which touchpoints are considered eligible for attribution credit.
UTM Parameters (Urchin Tracking Module) — URL tags added to links to track marketing campaign performance, e.g. `https://example.com/?utm_source=newsletter&utm_medium=email&utm_campaign=launch`.
utm_source — Identifies the traffic source (e.g., google, facebook, newsletter).
utm_medium — Identifies the marketing medium/channel type (e.g., cpc, email, social).
utm_campaign — Identifies the specific campaign name or promotion.
utm_content — Differentiates between similar content or links within the same campaign.
utm_term — Identifies paid search keywords driving traffic.
Tracking Pixel — A 1x1 transparent image embedded in web pages or emails that fires an HTTP request when loaded.
Incrementality Testing — Controlled experiments measuring the true causal impact of marketing.
Marketing Mix Modeling (MMM) — Statistical analysis evaluating the aggregate impact of marketing spend across channels.
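The attribution models above differ only in how they split credit across an ordered list of touchpoints. A small Ruby sketch (my own illustration, not any vendor's implementation) makes the differences explicit:

```ruby
# Sketch: fractional credit per touchpoint under three common
# attribution models. Touchpoints are ordered first → last.
def attribution_credit(touchpoints, model:)
  n = touchpoints.length
  case model
  when :last_touch
    touchpoints.each_with_index.map { |_, i| i == n - 1 ? 1.0 : 0.0 }
  when :linear
    Array.new(n, 1.0 / n)
  when :u_shaped
    return [1.0] if n == 1
    return [0.5, 0.5] if n == 2
    middle = 0.2 / (n - 2) # 40% first, 40% last, 20% split across middle
    touchpoints.each_with_index.map do |_, i|
      i.zero? || i == n - 1 ? 0.4 : middle
    end
  end
end

journey = %w[paid_search email organic direct]
attribution_credit(journey, model: :linear)   # => [0.25, 0.25, 0.25, 0.25]
attribution_credit(journey, model: :u_shaped) # => [0.4, 0.1, 0.1, 0.4]
```

Time-decay and data-driven models follow the same shape; they just replace the credit function with an exponential decay or a learned weighting.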
Mobile Attribution Terms
Mobile Measurement Partner (MMP) — A third-party platform that attributes app installs and in-app events to marketing sources. Examples: AppsFlyer, Adjust, Branch, Singular.
IDFA (Identifier for Advertisers) — Apple's device-level identifier for tracking and attribution on iOS.
GAID (Google Advertising ID) — Google's device-level identifier for Android devices.
SKAdNetwork (SKAN) — Apple's privacy-preserving attribution framework providing aggregated, anonymized conversion data.
Deterministic Attribution — Attribution method using exact identifier matches for high-confidence attribution.
Probabilistic Attribution (Fingerprinting) — Statistical method using device characteristics and behavioral patterns for attribution.
Deep Linking — Technology that routes users to specific in-app content rather than just opening an app.
Deferred Deep Linking — Deep linking that works even when the app isn't installed.
Self-Reporting Network (SRN) — Major ad platforms (Meta, Google, TikTok) that perform their own attribution.
Install Referrer — Data passed from app stores containing attribution information about where an app install originated.
Identity and User Identification Terms
Anonymous ID (anonymousId) — A UUID automatically generated for unknown visitors before they authenticate.
User ID (userId) — A persistent, unique identifier assigned to users after they authenticate.
Device ID — A unique identifier tied to a specific device.
Client ID — Browser-specific identifier generated by analytics tools (e.g. Google Analytics) to recognize the same browser instance across sessions.
Identity Resolution — The process of connecting multiple identifiers to create unified user profiles across devices and sessions.
Identity Graph — A data structure that maps relationships between different user identifiers.
Identity Stitching — The process of merging previously anonymous user sessions with identified user profiles.
Cross-Device Tracking — The ability to recognize and track the same user across multiple devices.
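Identity stitching is easiest to understand in code. This is a minimal in-memory sketch of the idea, with hypothetical class and method names of my own (real identity graphs handle merges, conflicts, and multiple devices, which this ignores):

```ruby
# Sketch of identity stitching: merge events recorded under an
# anonymous_id into the profile of the user_id they later log in as.
# Hypothetical in-memory structures, not a real identity-graph API.
class IdentityGraph
  def initialize
    @alias_of = {}                             # anonymous_id => user_id
    @profiles = Hash.new { |h, k| h[k] = [] }  # owner id => events
  end

  def track(id, event)
    owner = @alias_of.fetch(id, id)
    @profiles[owner] << event
  end

  # Called at login/signup: link the anonymous history to the user.
  def identify(anonymous_id:, user_id:)
    @alias_of[anonymous_id] = user_id
    @profiles[user_id].concat(@profiles.delete(anonymous_id) || [])
  end

  def events_for(user_id)
    @profiles[user_id]
  end
end

graph = IdentityGraph.new
graph.track("anon-42", "Viewed Pricing")  # pre-auth activity
graph.identify(anonymous_id: "anon-42", user_id: "user-7")
graph.track("anon-42", "Started Trial")   # same device, now stitched
graph.events_for("user-7")
# => ["Viewed Pricing", "Started Trial"]
```

This is why the `identify` call at signup/login matters so much in analytics instrumentation: without it, pre-auth behavior stays orphaned under the anonymous ID.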
Monetization and Conversion Optimization Language
CTA (Call-to-Action) — A button, link, or prompt that encourages users to take a specific action.
Friction — Any element that creates resistance or hesitation in the user journey, reducing conversion likelihood.
Bounce Rate — The percentage of visitors who leave a site after viewing only one page without taking action.
Heatmap — A visual representation of user behavior showing where visitors click, scroll, and focus attention.
Session Replay — Recordings of individual user sessions showing exactly how visitors navigate and interact.
Above the Fold — Content visible on a webpage without scrolling.
Social Proof — Evidence that others have used/endorsed a product (testimonials, reviews, logos, user counts).
Paywall — A barrier restricting content/feature access until users pay.
Hard Paywall — All features locked behind payment/subscription after optional free trial.
Soft Paywall — Some features available free while premium features require payment.
Metered Paywall — Users get limited free access (e.g., 3 articles/month) before hitting the paywall.
Dynamic Paywall — Adjusts access based on user behavior, engagement level, or segment in real-time.
Feature Gating — Restricting specific features to paid tiers to drive upgrade behavior.
Usage-Based Pricing — Pricing tied to consumption/usage metrics rather than flat subscriptions.
Tiered Pricing — Multiple pricing levels targeting different user segments and willingness to pay.
Pricing Optimization Terms
Price Elasticity — How demand changes in response to price changes. Elastic demand (elasticity magnitude > 1) = highly responsive to price; inelastic (< 1) = relatively insensitive.
Willingness to Pay (WTP) — The maximum price a customer will pay for a product/feature.
Van Westendorp Price Sensitivity Meter — Survey technique asking four questions about price thresholds to identify optimal price range.
Gabor-Granger Method — Price research technique presenting different price points to measure purchase likelihood.
Conjoint Analysis — Research method presenting product configurations at varying prices to reveal how features impact willingness to pay.
Value-Based Pricing — Setting prices based on perceived customer value rather than cost-plus or competitor pricing.
Price Anchoring — Psychological technique showing expensive options first to make lower tiers appear more reasonable.
Annual Discount — Offering reduced rates for annual commitment (typically 20-30% off monthly equivalent).
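Price elasticity is simple to compute from two observed price points. A sketch using the standard midpoint (arc elasticity) formula, with illustrative numbers of my own:

```ruby
# Sketch: arc price elasticity of demand from two observed
# (price, quantity) points, using the midpoint formula.
def price_elasticity(p1:, q1:, p2:, q2:)
  pct_dq = (q2 - q1) / ((q1 + q2) / 2.0)  # % change in quantity
  pct_dp = (p2 - p1) / ((p1 + p2) / 2.0)  # % change in price
  pct_dq / pct_dp
end

# Illustrative: raising price from $10 to $12 drops signups 100 → 80/mo.
e = price_elasticity(p1: 10, q1: 100, p2: 12, q2: 80)
e.abs > 1 # => true: demand is elastic at this price point
```

Elasticity is normally negative (price up, demand down), which is why the magnitude is what gets compared against 1.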
North Star Metrics and Goal-Setting Vocabulary
North Star Metric (NSM) — A single, company-wide metric that best captures the core value delivered to customers. Examples: Airbnb's "Nights Booked," Spotify's "Time Spent Listening." [Amplitude]
Input Metrics — The contributing factors that influence a North Star Metric. Common dimensions: breadth (reach), depth (engagement), frequency (cadence), efficiency. [Amplitude]
Output Metrics — Business outcomes that result from North Star Metric improvements (revenue, profit, market share).
OKRs (Objectives and Key Results) — A goal-setting framework where Objectives are qualitative goals, and Key Results are specific, measurable outcomes.
Committed OKRs — Goals the organization must achieve; expected to reach 100% completion.
Aspirational OKRs — Stretch goals aimed at significant progress; 70% achievement is considered successful.
One Metric That Matters (OMTM) — A team-specific focus metric that changes quarterly based on current priorities.
KPIs (Key Performance Indicators) — Quantitative metrics pegged to specific targets used to measure success.
Leading Indicators — Metrics that predict future outcomes and provide early warning signals.
Lagging Indicators — Metrics that measure past performance/outcomes.
SaaS and Revenue Metrics
MRR (Monthly Recurring Revenue) — Predictable monthly revenue from subscriptions.
ARR (Annual Recurring Revenue) — MRR × 12; commonly the headline metric used in SaaS company valuation.
New MRR — Recurring revenue from newly acquired customers in a period.
Expansion MRR — Additional revenue from existing customers through upgrades or add-ons.
Churned MRR — Revenue lost from customers who cancelled during a period.
Contraction MRR — Revenue reduction from existing customers downgrading plans.
Net Revenue Retention (NRR/NDR) — Revenue retained from existing customers including expansion minus churn/contraction. Over 100% indicates net negative churn.
Gross Revenue Retention (GRR) — Revenue retention excluding expansion; measures pure retention capability.
LTV (Customer Lifetime Value) — Total revenue expected from a customer over their entire relationship.
CAC (Customer Acquisition Cost) — Total cost to acquire one new customer, including marketing and sales expenses.
LTV:CAC Ratio — Compares customer lifetime value to acquisition cost. Benchmark: 3:1.
CAC Payback Period — Months required to recover customer acquisition cost.
ARPU/ARPA (Average Revenue Per User/Account) — Total MRR divided by total customers.
ACV (Annual Contract Value) — Annual value of a customer's subscription revenue.
Trial-to-Paid Conversion — Percentage of free trial users who convert to paying customers.
Quick Ratio (SaaS) — Measures growth efficiency: (New MRR + Expansion MRR) ÷ (Churned MRR + Contraction MRR). Benchmark: 4:1.
Unit Economics — The direct revenues and costs associated with a single customer.
NPS (Net Promoter Score) — Customer satisfaction measure based on likelihood to recommend (-100 to +100 scale).
Feature Adoption Rate — Percentage of users who adopt and use a specific feature.
Product Adoption Rate — Percentage of new active users relative to sign-ups over a specific period.
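The retention and efficiency ratios above all derive from the same four MRR movements. A sketch with illustrative numbers (my own, not a benchmark dataset):

```ruby
# Sketch: NRR, GRR, and Quick Ratio computed from one period's
# MRR movements. All figures are illustrative.
starting_mrr = 100_000.0
new_mrr      = 15_000.0   # from newly acquired customers
expansion    = 12_000.0   # upgrades / add-ons
churned      = 5_000.0    # cancellations
contraction  = 2_000.0    # downgrades

nrr = (starting_mrr + expansion - churned - contraction) / starting_mrr
grr = (starting_mrr - churned - contraction) / starting_mrr
quick_ratio = (new_mrr + expansion) / (churned + contraction)

nrr         # => 1.05 (105%: net negative churn)
grr         # => 0.93 (93% pure retention)
quick_ratio # => ~3.86 (just under the 4:1 benchmark)
```

Note that NRR deliberately excludes `new_mrr`: it isolates how revenue from the existing customer base alone evolves, which is why > 100% ("net negative churn") is such a strong signal.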
Key benchmarks reference
| Metric | Healthy Benchmark |
|---|---|
| LTV:CAC Ratio | ≥3:1 |
| Quick Ratio (SaaS) | ≥4:1 |
| Net Revenue Retention | ≥100% (Enterprise: 120%+) |
| Freemium Conversion | 2-5% |
| OKR Achievement | 60-70% |
| Monthly Churn | <0.83% (≈10% annual) |
| CAC Payback | <12 months |
| DAU/MAU Stickiness | 10-25% (best-in-class: 50%+) |
| Activation Rate (SaaS) | ~37.5% average |
Sources consulted
This glossary draws terminology from authoritative sources including:
- Engineering blogs from Airbnb, Netflix, Uber, Pinterest, and Dropbox
- Growth methodology content from Reforge, OpenView, and ProductLed
- Analytics platforms including Amplitude, Mixpanel, Segment, and Heap
- Experimentation platforms such as Statsig, LaunchDarkly, Optimizely, and GrowthBook
- Attribution platforms including AppsFlyer, Adjust, and Branch
- SaaS metrics resources from ChartMogul, ProfitWell, and Baremetrics
- Growth-focused publications including Lenny's Newsletter, First Round Review, and CXL
Methodology
I used Claude in deep research mode to build the initial framework for this page in 2025 and have expanded and updated it manually since. Here's a sample of the starting prompt used if you'd like to replicate this for another topic area.
Generate a long list of phrases used in articles and webpages talking about "growth engineer" or "growth engineering" and output it as a glossary type of format please.
Eg "auth vs unauth experience"
Review as many URLs as you can and do a query fan out on those keywords to dive into various sub topics.
RailsGrowth