
Skill gap detection system for enterprise learning

Valamis

EdTech

Overview

Build a proof-of-concept that assesses where individual employees actually stand in their skill development and surfaces targeted learning to close the gaps.

The Problem

Companies that define what skills each role requires still rarely have a clear picture of where individual employees actually stand against that list. Learning gets assigned by role, not by need. Gaps go undetected until they surface somewhere costly.

What You'll Build

A system that identifies genuine skill gaps at the individual level and maps them to existing content. The hard part isn't recommending content once a gap is found. It's the identification itself. Job titles, training history, and self-assessment all fall short. Your solution needs a better signal. Format is open: a web app, a Slack bot, a browser extension, or an ambient layer inside an existing tool.

Prizes

  • €1000

Detailed Information

Target group

This challenge is open to everyone. You don’t need a coding background to participate. Organizers and the ShadowStack vibe coding community (the team behind Sohjo Hacks 2026) will be available throughout the hackathon to help you build your idea.

You’re a good fit if you are:

  • A developer or designer interested in building AI-powered applications
  • Someone curious about innovative learning technologies
  • Someone who wants to help other people learn and grow
  • Anyone interested in the future of workplace education

The challenge

Take a junior associate at a law firm in 2026. The skills the role requires have shifted considerably in just a few years: new AI tools to work with effectively, emerging regulatory frameworks around those same technologies, and client expectations shaped by all of it. The firm already has a defined skill taxonomy for the role: a structured list of what associates are supposed to know and be able to do, with expected proficiency levels. What it doesn't have is a clear picture of where this particular person actually stands: which skills are solid, which are developing, and which gaps are hiding beneath the surface.

Manual assessment doesn't scale. A senior partner cannot meaningfully evaluate every relevant skill across their team, and even if they could do it today, the picture would be outdated within months. Skills evolve in step with the industry.

The result is that learning gets assigned by role or by calendar, not by actual need. Skill gaps go undetected until they surface somewhere more visible: a client deliverable, a performance review, an employee who quietly moves on.

The hard part is not recommending content once a gap is identified. That is a lookup problem. The hard part is the gap identification itself: figuring out, at the individual level, which skills are actually weak versus which just have not been exercised recently.

There is also the human side of the equation. Skill systems only work when employees actually use them. Learning cannot happen on someone else's behalf; it has to start with the person's own curiosity and drive to grow. A system that maps gaps accurately but does nothing to connect those gaps to what an employee actually cares about will be ignored. The most effective solutions find a way to make skill development feel personally relevant, not just organizationally useful.

The challenge is figuring out how to continuously understand the actual skill level of individual employees, not just what their role or training history suggests, and how to translate that understanding into learning that meaningfully closes the gaps.

Role-based assignment fails because two people with the same job title rarely have the same skill profile. Self-reported assessments fail because people often cannot accurately gauge their own gaps. Training history fails because completing a module does not mean the knowledge is transferred. What is left is inference from actual behavior or work output, and that raises both technical and practical questions about what inputs are available and how much you can trust them.

Strong submissions address this inference problem directly. A prototype that takes a realistic skill taxonomy, makes a meaningful assessment of where a person stands, and maps that picture to targeted content is hitting the core of the challenge. Both sides matter: the assessment, and the content targeting that follows from it. Judges will be able to interact with a working demo using realistic inputs.
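
As a concrete illustration of the assessment half, here is a minimal Python sketch under invented assumptions: a hand-written taxonomy with expected proficiency levels, evidence records from assessments or work samples, and a recency decay so that stale evidence lowers confidence instead of reading as weakness. The schema, the 1-5 levels, and the half-life heuristic are all hypothetical, not part of the challenge spec.

```python
from dataclasses import dataclass
from datetime import date

# Hypothetical schema -- the challenge does not prescribe one.

@dataclass
class Requirement:
    skill: str           # e.g. an ESCO skill label
    expected_level: int  # proficiency 1-5 that the role demands

@dataclass
class Evidence:
    skill: str
    observed_level: int  # proficiency 1-5 from an assessment or work sample
    observed_on: date

def assess(requirements: list[Requirement], evidence: list[Evidence],
           today: date, half_life_days: int = 180) -> list[dict]:
    """Estimate level and confidence per required skill.

    Stale evidence lowers confidence rather than the estimated level,
    which keeps 'weak' and 'not exercised recently' apart -- the
    distinction the brief calls out.
    """
    report = []
    for req in requirements:
        hits = [e for e in evidence if e.skill == req.skill]
        weights = [0.5 ** ((today - e.observed_on).days / half_life_days)
                   for e in hits]
        if hits:
            level = (sum(w * e.observed_level for w, e in zip(weights, hits))
                     / sum(weights))
            confidence = min(1.0, sum(weights))
        else:
            level, confidence = 0.0, 0.0
        report.append({
            "skill": req.skill,
            "gap": max(0.0, req.expected_level - level),
            "confidence": round(confidence, 2),
        })
    return report
```

The useful property is the split output: a large gap with high confidence suggests assigning content, while a large gap with low confidence suggests re-assessing first rather than pushing training.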

Possible directions

These are starting points, not requirements. Teams are free to take the challenge in any direction that fits the core problem.

  • AI-based skill assessment: an adaptive tool that evaluates where employees actually stand on a skill taxonomy, through scenario-based questions, analysis of submitted work samples, or conversational assessment. The output is a skill profile showing genuine gaps against a defined taxonomy, not just what someone’s role or training history implies.
  • Content-to-skill mapping: a system that takes an existing content library and a skill taxonomy and builds the bridge between them, so that when a gap is identified the right existing content can be surfaced intelligently rather than by keyword match. The challenge is making that mapping accurate and useful at scale (one possible sketch follows this list).
  • Invisible teammate: a bot or ambient layer that lives inside an existing tool like Slack, monitors the skill health of individuals or teams over time, and proactively delivers relevant content to the right people without being explicitly asked.
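
For the content-to-skill mapping direction, one way to prototype semantic matching is embedding similarity. The sketch below assumes the sentence-transformers library; the model choice is illustrative, and the skill labels and catalogue entries are invented for the law-firm example.

```python
from sentence_transformers import SentenceTransformer, util

# Illustrative model choice; any embedding API would work the same way.
model = SentenceTransformer("all-MiniLM-L6-v2")

# Invented skill labels and catalogue entries.
skills = [
    "reviewing AI-assisted legal drafts",
    "EU AI Act compliance",
    "client communication",
]
catalogue = [
    "Course: Verifying and editing LLM-generated contracts",
    "Webinar: What the EU AI Act means for law firms",
    "Lesson: Running effective client status calls",
]

skill_vecs = model.encode(skills, normalize_embeddings=True)
content_vecs = model.encode(catalogue, normalize_embeddings=True)

# Cosine similarity matrix: rows are skills, columns are content items.
scores = util.cos_sim(skill_vecs, content_vecs)

for i, skill in enumerate(skills):
    best = scores[i].argmax().item()
    print(f"{skill!r} -> {catalogue[best]!r} ({scores[i][best].item():.2f})")
```

Unlike keyword match, this pairs "reviewing AI-assisted legal drafts" with the LLM-contracts course even though the two strings share almost no words.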

Tech note

Teams can use any tools, frameworks, or APIs. Tools like Claude Code, Anthropic Claude API, OpenAI, open-source models, or any other stack are all welcome. Teams arrange their own API credits — no credits are provided as part of the challenge. Simulated inputs are expected. For the skill taxonomy, teams can use ESCO, the European Commission’s open skills and occupations database, rather than inventing their own. A few fictional employee profiles at different levels and a small content catalogue are enough to make a demo compelling. No real enterprise data is required.
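
To make "simulated inputs" concrete, a few lines of hand-written fixtures are genuinely enough. The shape below is one invented possibility: evidence records per fictional employee plus a tiny catalogue tagged with the skill each item teaches. The labels use ESCO-like phrasing but are hand-picked for illustration, not fetched from ESCO.

```python
from datetime import date

# Invented fixtures -- no real enterprise data required.
EMPLOYEES = [
    {"name": "Aino, junior associate",
     "evidence": [
         {"skill": "draft legal documents", "level": 2, "on": date(2025, 11, 3)},
         {"skill": "use AI drafting tools", "level": 1, "on": date(2025, 6, 12)},
     ]},
    {"name": "Mikko, senior associate",
     "evidence": [
         {"skill": "draft legal documents", "level": 4, "on": date(2026, 1, 20)},
         {"skill": "advise on regulatory compliance", "level": 3,
          "on": date(2025, 12, 1)},
     ]},
]

CATALOGUE = [
    {"title": "Prompting for contract review", "teaches": "use AI drafting tools"},
    {"title": "EU AI Act fundamentals", "teaches": "advise on regulatory compliance"},
    {"title": "Plain-language drafting", "teaches": "draft legal documents"},
]
```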

Evaluation criteria

Evaluated during the final demo/pitch on 26 March 2026:

  1. Idea fit to the challenge
  2. Readiness of the solution
  3. Innovativeness of the solution
  4. Quality of the demo/pitch delivery

About Valamis

Valamis is a comprehensive enterprise learning platform. Learners use it to browse and complete content, including individual lessons, structured learning paths, and live events, and to track their skill development against defined frameworks. Administrators and subject matter experts use the platform to build and manage those frameworks, curate and integrate content from external sources, and monitor progress and skill coverage across their organization.