Experimentation MCP Server

Manage feature flags and experiments with simple, conversational commands, directly from your development environment.

  • Effortless Integration: Create flags, configure experiments, and debug issues without switching to the Optimizely dashboard.
  • Streamlined Workflow: Eliminate context switching and complex API orchestration with simple, conversational commands.
  • Accelerated Development: Enhance development workflows with instant local performance and intelligent code generation.
  • Join the beta here

Example of a developer asking the Experimentation MCP Server whether their Feature Experimentation project has any stale flags that should be cleaned up

Holdout Groups

Measure and report on the true, cumulative impact of your experimentation program. Holdout Groups allow you to set aside a percentage of your audience, intentionally excluding them from tests. This creates a pure, unbiased control group against which you can compare the aggregate results of your entire experimentation strategy. A simplified sketch of the underlying bucketing idea follows the list below.

  • Measure Cumulative Impact: Understand the overall uplift and true business value generated by your complete experimentation program.
  • Unbiased Comparison: Compare results from the control group to those who see the feature / variation changes, providing clear insights into net effect.
  • Holistic Performance Insights: Gain a deeper understanding of how much your experiments and features are collectively contributing to your goals.
  • Maintain Measurement Integrity: Run multiple tests simultaneously without compromising the accuracy of your program-level measurement.
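
To make the mechanics concrete, here is a minimal TypeScript sketch of how holdout bucketing might work conceptually: a deterministic hash of the user ID places a fixed percentage of users into the holdout, and those users never enter experiment assignment. This is an illustration of the general technique under assumed details (the hash function and bucket scheme), not Optimizely's implementation.

```typescript
// Illustrative only: deterministic holdout bucketing by user ID.
// The FNV-1a hash and the 10,000-bucket scheme are assumptions for
// this sketch, not Optimizely's actual implementation.

const HOLDOUT_PERCENT = 5; // reserve 5% of traffic as the holdout

// Simple FNV-1a string hash; any stable hash works for bucketing.
function fnv1a(input: string): number {
  let hash = 0x811c9dc5;
  for (let i = 0; i < input.length; i++) {
    hash ^= input.charCodeAt(i);
    hash = Math.imul(hash, 0x01000193);
  }
  return hash >>> 0; // force unsigned 32-bit
}

// Map each user into one of 10,000 buckets; the lowest buckets form the holdout.
function isInHoldout(userId: string): boolean {
  const bucket = fnv1a(`holdout:${userId}`) % 10_000;
  return bucket < HOLDOUT_PERCENT * 100;
}

function assignUser(userId: string): string {
  if (isInHoldout(userId)) {
    return "holdout"; // sees only the baseline experience, excluded from all tests
  }
  return "experiment_eligible"; // proceeds to normal experiment assignment
}
```

Because the hash is deterministic, a user stays in the holdout across sessions and across every concurrent experiment, which is what keeps the aggregate comparison unbiased.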

A cumulative impact report comparing how the primary metric "Form Submission" performed for users outside the holdout group against users in the holdout group who did not see this experiment, with the measured improvement percentage.

Change Approvals

Change Approvals safeguard the quality and safety of feature flag and experiment changes, ensuring that every update is thoroughly vetted before release.

  • Approval Workflows: Set up granular approval actions for individual flags and experiments to maintain control over changes.
  • Team-Based Safeguards: Assign the right team members to approve or reject changes, adding a layer of accountability.
  • Streamlined Collaboration: Use intuitive workflows to manage and track approval processes, facilitating smooth teamwork.
  • Change Rationale: Accept or reject proposed changes with clear justifications, ensuring transparency.
  • Email Notifications: Keep requesters, approvers, and "watchers" informed with timely email updates throughout the approval process.

Contextual Bandits

Unlock true AI-powered 1:1 personalization.

  • Reduce guesswork and drive conversions by serving your visitors the most optimal and effective experience for them
  • Customize bandit algorithms to automatically personalize user experiences (a simplified sketch of the idea follows this list)
  • See which attributes were used to decide which variation to show users
  • See how each variation performed, and see traffic allocation over time per variation
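
For readers who want intuition for the mechanism behind these bullets, below is a minimal epsilon-greedy contextual bandit sketch in TypeScript. It is a generic illustration of the technique, not Optimizely's algorithm; the context key, epsilon value, and reward model are all assumptions.

```typescript
// Illustrative epsilon-greedy contextual bandit (not Optimizely's algorithm).
// Per context (e.g. device type), track each variation's observed reward and
// mostly serve the best performer while occasionally exploring.

type Stats = { plays: number; rewardSum: number };

class ContextualBandit {
  private stats = new Map<string, Map<string, Stats>>(); // context -> variation -> stats

  constructor(private variations: string[], private epsilon = 0.1) {}

  choose(context: string): string {
    // Explore with probability epsilon; otherwise exploit the best mean reward.
    if (Math.random() < this.epsilon) {
      return this.variations[Math.floor(Math.random() * this.variations.length)];
    }
    const byVariation = this.stats.get(context);
    if (!byVariation) return this.variations[0];
    let best = this.variations[0];
    let bestMean = -Infinity;
    for (const v of this.variations) {
      const s = byVariation.get(v);
      const mean = s && s.plays > 0 ? s.rewardSum / s.plays : 0;
      if (mean > bestMean) {
        bestMean = mean;
        best = v;
      }
    }
    return best;
  }

  record(context: string, variation: string, reward: number): void {
    const byVariation = this.stats.get(context) ?? new Map<string, Stats>();
    const s = byVariation.get(variation) ?? { plays: 0, rewardSum: 0 };
    s.plays += 1;
    s.rewardSum += reward;
    byVariation.set(variation, s);
    this.stats.set(context, byVariation);
  }
}

// Usage: choose a variation for a mobile visitor, then record a conversion.
const bandit = new ContextualBandit(["hero_a", "hero_b", "hero_c"]);
const shown = bandit.choose("device:mobile");
bandit.record("device:mobile", shown, 1); // reward of 1 = converted
```

Tracking reward per context is what makes the bandit "contextual": mobile visitors can converge on a different winning variation than desktop visitors.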

Screens are for illustrative purposes only and subject to change.

'Fixed Horizon' Stats Engine

A widely used methodology for analyzing experiment results over a set period of time, the 'Fixed Horizon' Stats Engine offers an additional model for customers in regulated environments and/or with well-defined traffic over specific time durations. A simplified version of the underlying sample size calculation follows the list below.

  • Ideal for regulated environments
  • Supports pre-defined test durations
  • Enables consistent, auditable analysis
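
As a concrete example of fixed-horizon planning, the sketch below computes the standard per-variation sample size for a two-proportion test from a baseline conversion rate and a minimum detectable effect. The formula is the textbook frequentist one; the defaults (95% two-sided confidence, 80% power) are common choices and may differ from the product's calculator shown below.

```typescript
// Illustrative fixed-horizon sample size for a two-proportion z-test.
// Standard textbook formula; alpha/power defaults are assumptions.

function sampleSizePerVariation(
  baselineRate: number,      // e.g. 0.10 for a 10% conversion rate
  minDetectableLift: number, // relative lift, e.g. 0.05 for +5%
  zAlpha = 1.96,             // z for 95% two-sided confidence
  zBeta = 0.8416             // z for 80% power
): number {
  const p1 = baselineRate;
  const p2 = baselineRate * (1 + minDetectableLift);
  const pBar = (p1 + p2) / 2;
  const numerator =
    zAlpha * Math.sqrt(2 * pBar * (1 - pBar)) +
    zBeta * Math.sqrt(p1 * (1 - p1) + p2 * (1 - p2));
  return Math.ceil(numerator ** 2 / (p2 - p1) ** 2);
}

// A 10% baseline rate with a +5% relative MDE needs roughly 58,000 users per arm.
console.log(sampleSizePerVariation(0.10, 0.05));
```

Because the horizon is fixed, this sample size is computed once before launch and the test runs until it is reached, which is what makes the analysis consistent and auditable.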

Fixed Horizon sample size calculator in Feature Experimentation

Note: This capability is available for customers using Optimizely Analytics with warehouse-native configurations. It has rolled out to Feature Experimentation customers with warehouse-native experimentation and is coming soon to Web Experimentation customers.

Next Generation Experimentation Analytics

Unlock deeper insights by bringing advanced analytics directly into experimentation. The new results page and expanded exploration tools from Optimizely Analytics become available for Web Experimentation and Feature Experimentation.

Key benefits:

  • Access richer analytics directly within experimentation workflows.

  • Use the new results page to connect test outcomes with deeper exploration tools.

  • Unify analytics and experimentation so test results become data-driven stories.

Enhanced experiment results page

Images are for illustrative purposes and subject to change.

New Experimentation Agents

  • Experiment Idea Building Agent: Captures your business goals, web analytics, and experimentation history and turns them into high-quality experimentation ideas.

  • Experiment Plan Creation Agent: Takes your team’s test ideas and transforms them into ready-to-build test plans that can be sent directly to JIRA.

Experiment Test Plan report

That's all for now! Scroll down to see what features are generally available now.

Read our Release Notes for more information on all releases.

Previous updates

Experiment Review Agent

Released

For Product Leaders, the Experiment Review Agent in Feature Experimentation is designed to accelerate your path to data-driven product decisions and ensure successful launches.

  • Validate Launches: Proactively identify and fix setup errors, reducing the risk of flawed experiments and ensuring robust QA.

  • Smarter Decisions: Leverage AI-driven insights to optimize experiment design, increasing your odds of finding winning features that drive impact.

  • Measure True Impact: Confirm accurate metric tracking to clearly understand the real-world performance and ROI of your feature releases.

  • Precise Control: Ensure accurate audience targeting and feature flag rules for reliable experimentation and confident rollouts.

Unlock smarter product decisions, reduce guesswork, and clearly tie product releases to customer engagement and revenue impact.

Before launching an A/B test, Opal can review your experiment and provide recommendations

Baseline Variation Setting

Released

Previously, customers manually adjusted the baseline on the results page, and this selection wasn't persistent. Now, you can define your preferred baseline before launching an experiment, saving you time when navigating to the results page.

  • Pre-configure Your Baseline: Select any variation as your baseline before starting an experiment, ensuring your results are always viewed from your desired comparison point.
  • Results View: The results page will automatically open with your chosen baseline, eliminating the need to manually filter each time.
  • Flexible Analysis: While your pre-selected baseline provides a default view, you still have the option to filter the baseline on the results page for ad-hoc analysis (a simplified illustration of baseline-relative lift follows below).
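
To illustrate why the baseline choice matters, here is a small TypeScript sketch showing that every lift on a results page is computed relative to whichever variation is set as the baseline. The data and function names are hypothetical, for illustration only.

```typescript
// Illustrative only: lifts are always relative to the chosen baseline,
// so pre-selecting it determines the default view of the results page.

type VariationResult = { key: string; conversionRate: number };

function liftVsBaseline(results: VariationResult[], baselineKey: string) {
  const baseline = results.find((r) => r.key === baselineKey);
  if (!baseline) throw new Error(`Unknown baseline: ${baselineKey}`);
  return results
    .filter((r) => r.key !== baselineKey)
    .map((r) => ({
      key: r.key,
      relativeLift: (r.conversionRate - baseline.conversionRate) / baseline.conversionRate,
    }));
}

// With "variation_b" pre-set as the baseline, every lift is computed against it.
console.log(
  liftVsBaseline(
    [
      { key: "original", conversionRate: 0.10 },
      { key: "variation_b", conversionRate: 0.12 },
    ],
    "variation_b"
  )
);
```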

Select a variation as your control before launching an experiment

Optimizely Edge Agent

Released

Run experiments entirely at the edge in Feature Experimentation as simply as possible, without implementing SDKs or compromising on performance, scalability, or flexibility. A simplified sketch of an edge decision flow follows the list below.

  • Minimize Latency and Flicker: Reduce latency and eliminate flicker by running experiments entirely at the edge.

  • Simplify with Edge SDK: Avoid complex SDK implementations using our comprehensive Edge SDK with built-in decision caching.

  • Serverless Scalability: Deploy the Agent in a serverless environment, easing infrastructure scaling concerns and improving efficiency.
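
To ground the architecture, here is a minimal TypeScript sketch of an edge handler that requests a flag decision from a nearby Agent instance and caches it briefly, so the page can be rendered server-side with no client SDK and no flicker. The endpoint URL, header name, and payload shape are assumptions for this sketch, not the Agent's documented interface.

```typescript
// Illustrative edge handler: fetch a decision from a nearby Agent instance
// and cache it briefly. Endpoint path, header name, and payload shape are
// assumptions; consult the Edge Agent documentation for the real interface.

const AGENT_URL = "https://edge-agent.example.com/v1/decide?keys=checkout_redesign"; // hypothetical
const SDK_KEY = "YOUR_SDK_KEY";

type Decision = { variationKey: string; enabled: boolean };

const cache = new Map<string, { decision: Decision; expires: number }>();
const TTL_MS = 30_000; // cache decisions briefly to avoid a network hop per request

async function decideAtEdge(userId: string): Promise<Decision> {
  const cached = cache.get(userId);
  if (cached && cached.expires > Date.now()) return cached.decision;

  const res = await fetch(AGENT_URL, {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      "X-Optimizely-Sdk-Key": SDK_KEY, // header name is an assumption
    },
    body: JSON.stringify({ userId, userAttributes: {} }),
  });
  const decision = (await res.json()) as Decision;
  cache.set(userId, { decision, expires: Date.now() + TTL_MS });
  return decision;
}

// Serve the variation's content directly at the edge: there is no
// client-side SDK, so nothing flickers while a decision loads.
export async function handleRequest(userId: string): Promise<string> {
  const { enabled, variationKey } = await decideAtEdge(userId);
  return enabled ? `render:${variationKey}` : "render:default";
}
```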

Map showing regional servers and edge servers that are closer to the end user