SERVICES

Structured advisory support for institutions responding to artificial intelligence in healthcare education

Astavalence works with medical and healthcare education institutions that need clearer, more responsible, and more practical approaches to artificial intelligence. Services are structured around four areas where institutions are already feeling pressure: governance, curriculum, assessment, and faculty capability.

Typical engagement windows
Rapid advisory reviews: 2 to 3 weeks
Focused service projects: 4 to 8 weeks
Strategic support: term-based or ongoing
Typical outputs
Institutional briefing papers
Curriculum and assessment recommendations
Faculty development sessions
Implementation guidance and next-step plans
Designed for
Medical schools
Healthcare faculties
Programme leadership teams
Institutions preparing for practical AI adoption
Governance
Institutional pressure
Institutions need clearer validation, policy, and implementation frameworks
International guidance has already highlighted that educational institutions remain underprepared to validate generative AI tools properly, particularly as regulation continues to lag behind deployment.
Curriculum
Educational pressure
AI literacy is expanding faster than standardised curricular approaches
Recent medical education review work shows growing activity in AI teaching, but continued absence of consensus on competencies, ethical frameworks, and consistent curricular design.
Assessment
Regulatory pressure
Assessment design and security now require explicit AI-era thinking
In regulated exam contexts, artificial intelligence is already being treated as a threat to assessment integrity, reinforcing the need for review of format, security, and defensible design.
Faculty development
Capability pressure
Staff need support that covers capability, ethics, and practical use
Doctors have identified needs for broad AI education, system-specific training, and stronger support around ethics and data protection, exactly the areas institutions are now being asked to address.

ENGAGEMENT AREAS

Four structured ways institutions typically work with Astavalence

Services are designed to support institutions at different stages of AI adoption, from early strategic review through to more focused curriculum, assessment, and faculty development work. Each engagement is scoped clearly, with defined outputs and realistic delivery windows.

01
Governance
AI governance and institutional readiness review
For institutions that need a clearer picture of where they stand, where the risks sit, and what a more defensible approach to AI adoption could look like across policy, practice, and educational delivery.
Typical scope
Review of current institutional position, emerging priorities, governance gaps, implementation pressures, and practical areas requiring clarification.
Indicative outputs
Readiness summary, briefing paper, risk and opportunity framing, and prioritised next-step recommendations.
Typical engagement window
2 to 4 weeks
02
Curriculum
Curriculum integration and AI learning objectives
For programme teams that want to move beyond informal AI exposure and begin shaping more deliberate educational responses, including curriculum mapping, learning aims, and responsible use expectations.
Typical scope
Review of existing curriculum touchpoints, opportunities for AI literacy, ethical and professional themes, and alignment with programme aims.
Indicative outputs
Curriculum recommendations, draft AI-related learning objectives, integration map, and suggested implementation priorities.
Typical engagement window
3 to 6 weeks
03
Assessment
Assessment review in the context of generative AI
For institutions reviewing where assessment design may now be vulnerable, unclear, or misaligned in an AI-enabled learning environment, particularly around authenticity, integrity, and defensible redesign.
Typical scope
Review of selected assessment formats, potential pressure points, existing guidance, and opportunities for clearer design or policy refinement.
Indicative outputs
Assessment observations, redesign recommendations, guidance considerations, and practical next-step options for programme teams.
Typical engagement window
3 to 5 weeks
04
Faculty development
Faculty development and implementation support
For leadership teams that need staff confidence to catch up with institutional ambition, through structured sessions that address practical use, educational judgement, professional responsibility, and implementation realities.
Typical scope
Focused workshops, briefing-led sessions, discussion of opportunities and risks, and tailored support around institutional priorities.
Indicative outputs
Session delivery, tailored slide material, staff-facing guidance points, and recommended areas for further capability development.
Typical engagement window
2 to 4 weeks

HOW ENGAGEMENTS WORK

A clear process from initial conversation to defined institutional output

Engagements are designed to be focused, structured, and proportionate to institutional need. Whether the brief is strategic or more targeted, the process is intended to give leadership teams clarity at each stage, with scope agreed early and outputs made explicit from the outset.

Step 01
Initial conversation
Understanding context and institutional priorities
A focused conversation to understand strategic context, internal pressures, existing questions, and the area where support would be most useful.
Step 02
Scoped proposal
Defining scope, outputs, and delivery window
A clearly shaped scope is developed around the brief, with agreed objectives, indicative timing, and defined outputs so expectations remain clear from the outset.
Step 03
Focused delivery
Work delivered against the agreed brief
Delivery is kept proportionate and focused, whether the engagement is a governance review, curriculum support piece, assessment review, or faculty development brief.
Step 04
Defined outputs
Clear recommendations and practical next steps
Institutions receive defined outputs with clear recommendations, allowing leadership and programme teams to act with greater confidence and direction.

Engagements can remain tightly focused or form part of a broader programme of support, depending on internal readiness, institutional priorities, and the scale of the brief.

DEFINED OUTPUTS

Examples of the kinds of outputs institutions may receive

Each engagement is scoped around a defined brief, which means outputs are made clear early. Depending on the nature of the work, institutions may receive focused written recommendations, mapped teaching proposals, staff development materials, or practical implementation guidance designed to support next steps.

Output 01
Briefing
Institutional briefing papers
Focused written summaries that clarify issues, frame opportunities and risks, and support internal discussion at leadership or programme level.
Typical use
Governance reviews, early strategic conversations, and decision-shaping.
Output 02
Curriculum
Curriculum mapping and integration recommendations
Structured recommendations showing where AI-related themes, learning objectives, and professional considerations may sit within the curriculum.
Typical use
Curriculum review, educational planning, and staged implementation.
Output 03
Assessment
Assessment observations and redesign guidance
Clear commentary on selected assessment formats, pressure points, and opportunities for more robust design in an AI-enabled context.
Typical use
Assessment review, policy refinement, and practical redesign discussion.
Output 04
Faculty
Staff development materials and session delivery
Tailored teaching materials, guided sessions, and practical discussion points to support staff confidence and implementation readiness.
Typical use
Faculty development, capability building, and local implementation support.
Output 05
Planning
Prioritised next step recommendations
A practical route forward that helps institutions decide what to address first, what can wait, and where internal effort is best directed.
Typical use
Post-review planning and leadership decision-making.
Output 06
Implementation
Practical implementation notes
Clear guidance intended to help internal teams turn recommendations into action in a way that remains proportionate and realistic.
Typical use
Internal follow-through, project shaping, and staged adoption.

Outputs vary according to scope, but engagements are always designed to leave institutions with practical material that can support clearer internal action.

WHERE THIS WORK BECOMES MOST USEFUL

The moments when institutions often need a clearer response to artificial intelligence

This work tends to become most useful when artificial intelligence is no longer just an interesting topic, but a practical educational pressure. The section below is designed to help leadership teams recognise those moments quickly.

Policy pressure
Governance
Governance and policy questions are becoming harder to postpone
Useful when institutions need clearer thinking around responsible use, internal position, policy direction, and the wider implications of adoption.
Curriculum pressure
Teaching
Curriculum teams need more deliberate educational placement
Useful when programme teams are starting to ask where AI literacy, ethics, professional judgement, and responsible use should sit within teaching.
Assessment pressure
Integrity
Assessment approaches no longer feel fully secure or sufficient
Useful when learner use of generative tools is beginning to challenge authenticity, clarity, or defensible assessment design.
Capability pressure
Faculty
Faculty confidence is not yet keeping pace with institutional need
Useful when educators need structured support so that implementation feels informed, proportionate, and educationally credible rather than reactive.
Readiness pressure
Strategy
Readiness is still emerging, but expectations are rising
Useful when awareness is growing, but internal clarity, direction, and practical planning are still incomplete.
Implementation pressure
Delivery
Implementation needs to become more deliberate and proportionate
Useful when leadership teams want focused advisory work with clear scope, realistic outputs, and specialist educational understanding.

The common thread is the point at which artificial intelligence begins to require clearer institutional judgement and more deliberate educational action.

NEXT STEP

If your institution is beginning to ask more serious questions, this is a sensible point to start the conversation

Initial discussions can be used to clarify context, likely scope, and the kind of support that would be most useful. In some cases that may lead to a focused advisory piece. In others, it may simply help determine the right starting point.

Conversation
Clarify the context
A first discussion can help define the institutional questions, pressures, and priorities that matter most.
Scope
Shape the right brief
The conversation can be used to determine whether a focused review, a specific service area, or a broader piece of support would be most useful.
Direction
Identify a realistic starting point
In some cases, the most valuable outcome is simply greater clarity on what should happen first and what can follow later.

The aim is not to overcomplicate the response, but to help institutions move with greater clarity, proportion, and educational judgement.