Driving AI Adoption by Transforming Work

Baris Kavakli

From 200 possibilities to 5 priorities — in a single afternoon.


The Problem

Every enterprise has the same AI challenge: too many ideas, not enough focus. Teams brainstorm dozens of use cases on whiteboards and sticky notes, but six months later, nothing has been built. The gap between “we should use AI for that” and actually deploying it is where most adoption programmes die.

The root cause is simple. Traditional workshops produce unstructured output — scattered notes, subjective opinions from the loudest voices, and no clear link between what people want and what’s actually feasible. You get the opinion of the few instead of the wisdom of the crowd. Leadership gets a PowerPoint summary. The organisation gets nothing.

What We Built

Portera’s AI Use Case Workshop Platform is a browser-based application that turns a half-day facilitated session into a structured, data-driven AI roadmap. Participants work on their own devices while a facilitator guides them through six phases — from discovery to a final prioritised shortlist with detailed implementation specifications.

The platform combines a pre-loaded use case catalog, real-time collaboration in small pods, an Impact x Viability prioritisation matrix, and AI-assisted documentation — all in a single interactive session.

No installations. No accounts to create. Participants open a link and start working.

How We Classify AI Use Cases

Most AI use case lists are flat — a spreadsheet of ideas with vague labels like “automation” or “analytics.” That makes prioritisation impossible because you’re comparing apples to engines. You need two independent dimensions to place any use case on a map that actually means something.

We built a classification framework with two axes.

The Y-axis: How much freedom does the AI have?

This is the autonomy gradient — it describes the relationship between the AI and the human, not what the AI does technically.

  • Assist: AI helps humans work better. The human controls the output. Think: a copilot that drafts a summary you edit before sending.
  • Advise: AI recommends actions. The human decides and executes. Think: a system that suggests the three best promotion scenarios — you pick one.
  • Execute: AI takes action autonomously within defined bounds. Think: an agent that auto-posts matched invoices and only routes exceptions to a human.
  • Orchestrate: AI manages complex, multi-step workflows end-to-end. Think: a system that coordinates supplier onboarding across procurement, legal, and finance without a human quarterbacking every handoff.

The gradient matters because it determines governance requirements, change management effort, and organisational readiness. An Assist-level use case can go live in weeks. An Orchestrate-level use case needs months of process redesign. Knowing where a use case sits on this axis tells you what kind of commitment you’re signing up for — before you write a single line of code.

The X-axis: What cognitive work does the AI actually perform?

This is the function pipeline — six distinct types of cognitive work, ordered from data input to deliverable output.

  • Extract: Raw data → structured data. Turns documents, images, speech, and sensor feeds into usable information.
  • Detect: Structured data → signals. Spots anomalies, deviations, risks, and threshold breaches.
  • Analyze: Signals → insights. Synthesises data, finds patterns, reasons about cause and effect.
  • Forecast: Insights → predictions. Models future states, scenarios, and probabilities.
  • Optimize: Predictions + constraints → best action. Finds the best option among alternatives.
  • Generate: Best action → deliverable output. Creates documents, content, code, plans, or designs.

The six functions form a natural pipeline: AI first extracts structure from messy inputs, then detects what’s unusual, analyses why it’s happening, forecasts what will happen next, optimises the best course of action, and generates the deliverable. Each step feeds the next — but any use case typically lives in one primary function.

Why two independent axes?

The axes are deliberately orthogonal. Any cognitive function can operate at any autonomy level. An AI that detects fraud can assist a human analyst (Assist), or it can block a transaction in milliseconds without asking (Execute). An AI that generates a report can draft it for your review (Assist), or it can assemble, validate, and distribute it end-to-end (Orchestrate).

This independence is what makes the framework useful for prioritisation. When workshop participants place a use case on the matrix, they’re answering two separate questions: what should the AI think about? and how much should we trust it to act? Those are different conversations with different stakeholders — and conflating them is where most AI roadmaps go wrong.
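
To make the orthogonality concrete, the framework can be sketched as two independent enumerations. This is an illustrative data model, not the platform's actual code; all names are hypothetical.

```python
from dataclasses import dataclass
from enum import Enum

class Autonomy(Enum):
    """Y-axis: how much freedom the AI has."""
    ASSIST = 1
    ADVISE = 2
    EXECUTE = 3
    ORCHESTRATE = 4

class Function(Enum):
    """X-axis: what cognitive work the AI performs."""
    EXTRACT = 1
    DETECT = 2
    ANALYZE = 3
    FORECAST = 4
    OPTIMIZE = 5
    GENERATE = 6

@dataclass
class UseCase:
    name: str
    function: Function   # what the AI thinks about
    autonomy: Autonomy   # how much we trust it to act

# The same cognitive function can sit at different autonomy levels:
fraud_review = UseCase("Flag suspicious transactions for analyst review",
                       Function.DETECT, Autonomy.ASSIST)
fraud_block = UseCase("Block fraudulent transactions in real time",
                      Function.DETECT, Autonomy.EXECUTE)
```

Because the two axes are separate fields, the fraud examples above share a Function but differ in Autonomy — exactly the two separate conversations the framework is designed to keep apart.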

338 use cases, pre-classified

Our catalog contains 338 inspiration use cases across eight industries — FMCG, Retail, Manufacturing, Financial Services, Energy & Utilities, Healthcare & Life Sciences, Logistics & Transportation, and Professional Services. Every use case is pre-classified on both axes, sourced from public deployments, and backed by real company examples. Workshop participants don’t start from a blank page. They start from a structured map of what’s already working in their industry.

How It Works

Phase 1 — Discover (~3 min)

Participants browse a curated catalog of AI use cases on their own device. The catalog draws from multiple sources: existing deployments within the organisation, use cases submitted by employees, external inspiration cases from industry, and cases surfaced during prior research (shadowing sessions, interviews, intake forms). Each use case includes a description, autonomy level, AI function, department fit, expected value, and complexity rating.

Phase 2 — Individual Selection (~12 min)

Each participant privately selects their top 3 use cases across three scopes: Me (personal productivity), We (team-level), and Us (organisation-wide). They can also propose new use cases not in the catalog. Selections are anonymous — the facilitator sees completion status but not who picked what.

Phase 3 — Pod Consolidation (~25 min)

Participants form pods of 3–5 people. They discuss their individual picks, debate priorities, and align on a final 3 use cases per pod. The pod leader submits the consolidated selection through the platform. The facilitator monitors pod progress in real time — seeing which pods are working, which have submitted, and which need attention. Pod leaders can signal “we need help” directly from their screen.

Phase 4 — Impact x Viability Matrix (~30 min)

All selected use cases flow into a facilitator-led prioritisation session. The facilitator drags use cases onto a 2×2 matrix (Impact vs. Viability) on the main screen while challenging the room: How many people does this affect? What systems are required? What’s the realistic timeline? The matrix makes trade-offs visible and forces honest conversation. Individual picks that didn’t make the pod cut remain accessible — nothing is lost.
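
The matrix placement reduces to a simple rule: two normalised scores and a threshold that splits high from low. A minimal sketch, assuming scores on a 0–1 scale; the quadrant labels here are illustrative conventions, not the platform's own terms.

```python
def quadrant(impact: float, viability: float, threshold: float = 0.5) -> str:
    """Place a use case in one of four quadrants of a 2x2 matrix.

    impact, viability: scores normalised to the 0..1 range.
    threshold: the cut line separating 'high' from 'low' on each axis.
    """
    high_impact = impact >= threshold
    high_viability = viability >= threshold
    if high_impact and high_viability:
        return "prioritise"        # high impact, high viability: do these first
    if high_impact:
        return "invest to enable"  # high impact, low viability: remove blockers
    if high_viability:
        return "quick experiment"  # low impact, high viability: cheap learning
    return "park"                  # low impact, low viability: revisit later

# A use case that affects many people and runs on existing systems:
print(quadrant(0.9, 0.8))
```

The facilitator's challenge questions ("How many people does this affect? What systems are required?") are, in effect, ways of moving a use case along one of these two axes before it settles into a quadrant.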

Phase 5 — Deep Dive (~60 min)

The facilitator assigns the highest-priority use cases back to pods for detailed specification. Each pod documents: the problem being solved, the proposed solution, expected outcomes, success metrics, required data sources, system dependencies, and stakeholder impact. Built-in AI assistance helps participants structure their thinking — they write rough notes and the platform converts them into well-documented specifications. Multiple pod members can work on different use cases simultaneously.

Phase 6 — Final Vote (~10 min)

Every participant gets 3 votes. They review the deep-dive outputs and vote independently on the use cases they believe should be prioritised. Vote counts appear as bubble sizes on the final board, giving the room an instant visual consensus. The facilitator can still adjust Impact and Viability scores based on final-round discussion.
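
The vote-to-bubble mapping can be sketched in a few lines. This is a hypothetical implementation, assuming bubble area (not radius) should be proportional to vote count so that a case with twice the votes does not look four times as big.

```python
import math
from collections import Counter

def bubble_sizes(ballots, min_radius=10, scale=8):
    """Tally anonymous votes per use case and map counts to bubble radii.

    Radius grows with sqrt(count), so bubble *area* scales linearly
    with the number of votes.
    """
    counts = Counter(ballots)
    return {case: min_radius + scale * math.sqrt(n)
            for case, n in counts.items()}

# Each string is one of a participant's 3 votes:
ballots = ["invoice-matching", "demand-forecast", "invoice-matching",
           "hr-chatbot", "invoice-matching", "demand-forecast"]
sizes = bubble_sizes(ballots)
# invoice-matching draws the largest bubble (3 votes), hr-chatbot the smallest
```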


What You Walk Out With

  • A ranked priority list of AI use cases backed by structured group consensus, not opinion
  • Detailed use case specifications for every priority — problem, solution, outcome, metrics, data requirements, and dependencies
  • Impact and viability scores for each use case, mapped on a visual matrix
  • Organisation-wide vote counts showing where genuine demand exists
  • A complete audit trail of individual selections, pod decisions, and final votes

Everything the organisation needs to move from “workshop” to “roadmap” without another month of synthesis.

Why This Approach Works

Every voice is heard

Individual selection happens privately before group discussion. This eliminates the bias where senior voices dominate and quieter team members self-censor. The data shows what people actually think, not what they’re comfortable saying out loud.

Structure replaces subjectivity

The six-phase flow is designed to progressively narrow focus. 200 use cases become 12 pod picks, then 6-8 matrix-prioritised cases, then 4-5 deep-dived specifications, then a final vote. Each step has a clear input, a clear output, and a time constraint.

Speed without sacrifice

A traditional consulting approach takes weeks of interviews, analysis, and report-writing to produce what this platform generates in 2.5 hours. The AI-assisted deep dive means participants don’t get stuck writing — they focus on thinking.

Built for multi-country scale

Run the same workshop across different countries or business units, then consolidate results. Use cases that surface independently in multiple markets signal the highest cross-organisational value. The platform tracks provenance so you can see which priorities are universal and which are market-specific.
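
The consolidation logic behind that provenance tracking can be sketched as a simple tally: count how many markets independently surfaced each use case and rank by that count. A hypothetical sketch with invented market codes and use case names.

```python
from collections import Counter

def cross_market_priorities(results):
    """Rank use cases by how many markets independently prioritised them.

    results: dict mapping each market to the set of use cases
             that market's workshop prioritised.
    Returns (use_case, market_count) pairs, highest count first.
    """
    counts = Counter(case for cases in results.values() for case in cases)
    return sorted(counts.items(), key=lambda kv: -kv[1])

workshops = {
    "DE": {"invoice-matching", "demand-forecast"},
    "FR": {"invoice-matching", "hr-chatbot"},
    "UK": {"invoice-matching", "demand-forecast"},
}
ranking = cross_market_priorities(workshops)
# invoice-matching surfaces in all three markets, so it tops the list
```

A use case at the top of this ranking is universal; one appearing in a single market is a local priority — the distinction the platform's provenance data makes visible.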

Replicable at every level

The format works for corporate functions (Finance, HR, Marketing, Operations) and for frontline operations (stores, warehouses, distribution centres). The use case catalog is customised per audience; the workshop flow stays the same.


Who It’s For

Heads of AI / CDOs / CDAOs who need to move from strategy to execution and want a defensible, participatory process for selecting where to invest.

Functional leaders (VP Operations, CHRO, CMO) who know AI can help their teams but need a structured way to identify and prioritise the right use cases with their people — not for them.

Country or regional leaders in global organisations who need to localise a global AI strategy while maintaining consistency across markets.

AI programme teams running adoption at scale who need a repeatable, efficient format that produces comparable output across dozens of sessions.


How We Deliver It

The platform is part of Portera’s AI Adoption Programme — not sold as standalone software. We provide:

  1. Pre-workshop research — shadowing sessions, employee intake surveys, and industry benchmarking to build a tailored use case catalog before the workshop begins
  2. Facilitated workshop delivery — our consultants run the session, manage group dynamics, and challenge assumptions in real time
  3. Post-workshop synthesis — cross-country consolidation, final roadmap, and implementation recommendations
  4. Use case implementation — we build and deploy the prioritised AI agents on your platform of choice, from Microsoft Copilot and Azure AI to custom solutions

Portera holds the Microsoft Advanced Specialisation in Data & AI — we don’t just identify the right use cases, we build them.

The platform is the engine. The consulting expertise is what makes it produce results.


Proven in Practice

Built and battle-tested with a Fortune 500 global retailer across multiple countries and departments — from Commercial Excellence to HR, from corporate offices to store operations. The format has been validated with senior leadership, functional teams, and frontline managers.

Client feedback from pilot sessions:

“Super nice, super intuitive. Everything gets tracked across the different steps. People will like it — and it’s something they’d be willing to replicate.”

“This is a very nice platform. I’ve never seen something like this.”


Portera — AI & Data Consulting

From Data to Outcomes.