ROADMAP & RELEASES

Latest updates for Feature Experimentation

From feature flagging to testing and release management, check out the latest and greatest coming to (and recently released in) Optimizely Feature Experimentation.


Now streaming!

Our Summer '25 Product Roadmap Series.

Get a first look at what’s coming from Optimizely, shared by our product leaders in a fresh, on-demand format. The perfect fit for your busy summer schedule!

Start with the ultimate Optimizely One Intro to the Series which even includes some Opticon teasers. Then head over to our sessions which span all the hottest topics of the summer: AI, Analytics, Testing & Personalization, Content, and Commerce.

Jump in!

And now on to our product updates...

Experimentation MCP Server

Manage flags and experiments directly from your AI assistant through simple, conversational commands.

  • Effortless Integration: Create flags, configure experiments, and debug issues without switching to the Optimizely dashboard.
  • Streamlined Workflow: Eliminate context switching and complex API orchestration with simple, conversational commands.
  • Accelerated Development: Enhance development workflows with instant local performance and intelligent code generation.
  • Join the beta here

Example of a developer asking the Experimentation MCP Server whether any stale flags should be cleaned up in their Feature Experimentation project.
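Under the hood, MCP clients invoke server tools with JSON-RPC 2.0 messages. The sketch below shows the general shape of such a request per the Model Context Protocol spec; the tool name `list_stale_flags` and its arguments are hypothetical, and the actual tools exposed by the Experimentation MCP Server may differ.

```python
import json

# A minimal sketch of an MCP "tools/call" request (JSON-RPC 2.0).
# The tool name and arguments below are hypothetical examples, not the
# Experimentation MCP Server's actual tool surface.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "list_stale_flags",           # hypothetical tool name
        "arguments": {"project_id": "12345"}, # hypothetical argument
    },
}

print(json.dumps(request, indent=2))
```

Your AI assistant builds and sends messages like this for you; the "conversational command" is just natural language that the client translates into a tool call.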

Holdout Groups

Measure and report on the true, cumulative impact of your experimentation program. Holdout Groups allow you to set aside a percentage of your audience, intentionally excluding them from tests. This creates a pure, unbiased control group against which you can compare the aggregate results of your entire experimentation strategy.

  • Measure Cumulative Impact: Understand the overall uplift and true business value generated by your complete experimentation program.
  • Unbiased Comparison: Compare results from the control group to those who see the feature / variation changes, providing clear insights into net effect.
  • Holistic Performance Insights: Gain a deeper understanding of how much your experiments and features are collectively contributing to your goals.
  • Maintain Measurement Integrity: Run multiple tests simultaneously without compromising the accuracy of your program-level measurement.

A cumulative impact report showing how the primary metric "Form Submission" performed for visitors outside the holdout group versus those in the holdout group who did not see the experiment, along with the measured improvement percentage.
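The program-level comparison a Holdout Group enables can be sketched as simple arithmetic: the conversion rate of visitors exposed to the experimentation program versus the held-out baseline, and the relative lift between them. The numbers below are illustrative, not real results.

```python
# Illustrative sketch of a cumulative-impact comparison, assuming a simple
# conversion metric. Not Optimizely's actual reporting implementation.

def conversion_rate(conversions: int, visitors: int) -> float:
    return conversions / visitors

def relative_lift(exposed_rate: float, holdout_rate: float) -> float:
    """Relative improvement of the exposed group over the holdout baseline."""
    return (exposed_rate - holdout_rate) / holdout_rate

exposed_rate = conversion_rate(1150, 10_000)  # saw experiments: 11.5%
holdout_rate = conversion_rate(1000, 10_000)  # holdout group:   10.0%

print(f"Cumulative lift: {relative_lift(exposed_rate, holdout_rate):.1%}")
# -> Cumulative lift: 15.0%
```

Because the holdout never sees any experiment, this lift reflects the net effect of the whole program rather than any single test.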

Change Approvals

Change Approvals safeguard the quality and safety of feature flag and experiment changes, ensuring that every update is thoroughly vetted before release.

  • Approval Workflows: Set up granular approval actions for individual flags and experiments to maintain control over changes.
  • Team-Based Safeguards: Assign the right team members to approve or reject changes, adding a layer of accountability.
  • Streamlined Collaboration: Use intuitive workflows to manage and track approval processes, facilitating smooth teamwork.
  • Change Rationale: Accept or reject proposed changes with clear justifications, ensuring transparency.
  • Email Notifications: Keep requesters, approvers, and "watchers" informed with timely email updates throughout the approval process.

Optimizely Edge Agent

Run experiments entirely at the edge as simply as possible, without implementing SDKs or compromising on performance, scalability, or flexibility.

  • Minimize Latency and Flicker: Reduce latency and eliminate flicker by running experiments entirely at the edge.

  • Simplify with Edge SDK: Avoid complex SDK implementations using our comprehensive Edge SDK with built-in decision caching.

  • Serverless Scalability: Deploy the Agent in a serverless environment, easing infrastructure scaling concerns and improving efficiency.

  • Join the beta here

Diagram comparing a regional server architecture with an edge server architecture.

Contextual Bandits

Unlock true AI-powered 1:1 personalization.

  • Reduce guesswork and drive conversions by serving your visitors the most optimal and effective experience for them
  • Customize bandit algorithms to automatically personalize user experiences
  • See which attributes were used to decide which variation to show users
  • See how each variation performed, and see traffic allocation over time per variation

Screens are for illustrative purposes only and subject to change.
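To make the bandit idea concrete, here is a toy epsilon-greedy contextual bandit that learns which variation to serve per context. This is purely illustrative; Optimizely's production bandit algorithms are more sophisticated, and this is not their implementation.

```python
import random
from collections import defaultdict

# Toy epsilon-greedy contextual bandit: explore occasionally, otherwise
# exploit the variation with the best observed mean reward for the context.
class EpsilonGreedyBandit:
    def __init__(self, variations, epsilon=0.1):
        self.variations = variations
        self.epsilon = epsilon
        # Running reward statistics per (context, variation) pair.
        self.counts = defaultdict(int)
        self.totals = defaultdict(float)

    def choose(self, context: str) -> str:
        if random.random() < self.epsilon:  # explore
            return random.choice(self.variations)
        def mean(v):  # exploit: best observed mean reward in this context
            n = self.counts[(context, v)]
            return self.totals[(context, v)] / n if n else 0.0
        return max(self.variations, key=mean)

    def update(self, context: str, variation: str, reward: float) -> None:
        self.counts[(context, variation)] += 1
        self.totals[(context, variation)] += reward

bandit = EpsilonGreedyBandit(["control", "variation_a", "variation_b"])
choice = bandit.choose(context="mobile")
bandit.update("mobile", choice, reward=1.0)  # e.g. the visitor converted
```

Over many rounds, traffic shifts toward whichever variation performs best for each context, which is exactly the per-variation traffic-allocation-over-time view the reports surface.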

'Fixed Horizon' Stats Engine

The 'Fixed Horizon' Stats Engine is a widely used methodology for analyzing experiment results over a set period of time. It will offer an additional statistical model for customers in regulated environments and/or with well-defined traffic over specific time durations.

  • Ideal for regulated environments
  • Supports pre-defined test durations
  • Enables consistent, auditable analysis

The Fixed Horizon sample size calculator in Feature Experimentation.

Note: This capability is available for customers using Optimizely Analytics with data warehouse native configurations.
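Fixed-horizon testing requires committing to a sample size up front. The sketch below shows a standard two-sided sample size calculation for comparing two conversion rates; it illustrates the kind of planning a fixed-horizon calculator performs and is not Optimizely's exact implementation.

```python
import math
from statistics import NormalDist

# Standard fixed-horizon sample size formula for a two-proportion test,
# shown for illustration only (defaults: 95% confidence, 80% power).
def sample_size_per_variation(baseline: float, mde: float,
                              alpha: float = 0.05, power: float = 0.8) -> int:
    """Visitors needed per variation to detect a relative MDE."""
    p1 = baseline
    p2 = baseline * (1 + mde)  # expected treatment conversion rate
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)
    z_beta = NormalDist().inv_cdf(power)
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    n = (z_alpha + z_beta) ** 2 * variance / (p2 - p1) ** 2
    return math.ceil(n)

# e.g. 10% baseline conversion, 10% relative minimum detectable effect
print(sample_size_per_variation(0.10, 0.10))
```

Once that many visitors per variation have been collected, the test is read out exactly once, which is what makes fixed-horizon results consistent and auditable.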


That's all for now! Scroll down to see what features are generally available now.

Read our Release Notes for more information on all releases.

Previous updates