Over the past three months, our interns have been exactly that — an investment in what’s next. From day one, they became part of our everyday work life: joining daily discussions, absorbing our way of thinking, and contributing to the culture that shapes how we work together. They weren’t just here to observe; they were here to participate.
Guided by our internal mentors, the interns took on a real project and helped turn ideas and policy documents into a working, AI-driven system. Along the way, they brought fresh perspectives, curiosity, and energy — and reminded us why mentorship and learning go both ways.
Below is a recap of their presentation, highlighting the approach, key decisions, and lessons learned throughout the project.
An Iterative Development Cycle — With a Twist
Instead of starting with clear requirements, the team began with uncertainty — no structured documentation, no predefined user stories, and no single person to ask for answers. The only source of truth was a set of internal policy documents, a scenario that closely mirrors real-life consulting challenges.
Once the direction was set, the team still followed our agile way of working:
- Requirement extraction
- Design phase
- Development phase
- Short feedback cycles
- Iteration and refinement
This iterative approach allowed them to move in small steps, review decisions often, and continuously refine the solution as their understanding evolved.
From Policies to User Stories Using AI
With no clear requirements to start from, the analysis phase itself became the first challenge. Rather than tackling it manually, the team turned to AI, specifically a Retrieval-Augmented Generation (RAG) approach built on large language models.
Before they could extract requirements, they first had to build the RAG system itself. This became the project’s first phase: using internal policy documents as a knowledge base to generate meaningful user stories.
By doing so, the team demonstrated how AI can support early-stage analysis and help bring structure where none initially exists — turning static documents into actionable development inputs.
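To make the idea concrete, here is a minimal sketch of the retrieval step in such a pipeline: score each policy chunk against a query and fold the best matches into a prompt for the language model. This is our simplified illustration, not the interns' actual code; a real RAG system would use embedding-based search rather than the keyword overlap shown here, and all names are illustrative.

```java
import java.util.*;
import java.util.stream.*;

// Sketch of RAG retrieval: rank policy chunks by keyword overlap with the
// query, then assemble the top matches into a user-story prompt for the LLM.
// (Production systems typically use embeddings instead of keyword overlap.)
public class PolicyRetriever {

    // Split text into a set of lowercase word tokens.
    static Set<String> tokens(String text) {
        return Arrays.stream(text.toLowerCase().split("\\W+"))
                .filter(t -> !t.isBlank())
                .collect(Collectors.toSet());
    }

    // Return the top-k chunks sharing the most tokens with the query.
    static List<String> retrieve(List<String> chunks, String query, int k) {
        Set<String> q = tokens(query);
        return chunks.stream()
                .sorted(Comparator.comparingLong(
                        (String c) -> tokens(c).stream().filter(q::contains).count())
                        .reversed())
                .limit(k)
                .collect(Collectors.toList());
    }

    // Embed the retrieved excerpts into a prompt asking for user stories.
    static String buildPrompt(List<String> context, String query) {
        return "Using only the policy excerpts below, write user stories.\n\n"
                + String.join("\n---\n", context)
                + "\n\nTask: " + query;
    }
}
```

The key design point is the "only the policy excerpts below" constraint: grounding the model in retrieved text is what turns static documents into traceable development inputs.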
Learning First, Automation Second
AI-assisted code generation was explored and used to a certain extent. However, the primary goal of the internship was learning, not automation. For that reason, the implementation phase was handled mostly in a traditional way, with limited AI assistance.
The result was a Spring Boot RESTful backend, built using standard controllers, services, and repositories. This structure ensured clarity, maintainability, and extensibility.
The backend addressed a real, everyday use case: managing vacation requests. Employees can submit requests through a single system, which then routes them to the appropriate consultant managers or team leads, replacing informal messages and manual tracking with a clear and consistent process.
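The routing logic at the heart of that flow can be sketched in a few lines of plain Java. Spring annotations, persistence, and HTTP wiring are omitted here, and the class and field names are our own illustration rather than the interns' implementation:

```java
import java.util.*;

// Sketch of the vacation-request flow: an employee submits a request and
// the service routes it to their manager or team lead automatically.
// An in-memory map stands in for the real repository layer.
public class VacationService {

    public enum Status { PENDING, APPROVED, REJECTED }

    public static class VacationRequest {
        public final String employee;
        public final String approver;
        public Status status = Status.PENDING;
        VacationRequest(String employee, String approver) {
            this.employee = employee;
            this.approver = approver;
        }
    }

    // Who approves whom (stand-in for a repository lookup).
    private final Map<String, String> managerOf = new HashMap<>();
    private final List<VacationRequest> requests = new ArrayList<>();

    public void assignManager(String employee, String manager) {
        managerOf.put(employee, manager);
    }

    // Submitting a request routes it to the right approver in one step,
    // replacing informal messages and manual tracking.
    public VacationRequest submit(String employee) {
        String approver = managerOf.getOrDefault(employee, "hr");
        VacationRequest r = new VacationRequest(employee, approver);
        requests.add(r);
        return r;
    }

    // A manager or team lead sees only the requests routed to them.
    public List<VacationRequest> inboxOf(String approver) {
        return requests.stream().filter(r -> r.approver.equals(approver)).toList();
    }
}
```

In the real backend this logic sits in the service layer, behind a controller and in front of a repository, which is what keeps the structure clear and extensible.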
Rethinking the Frontend: A Conversational Interface
At that point, a new question emerged: should the team build a traditional frontend?
Once again, they chose to step away from the conventional approach. Instead of a classic UI, they implemented a chatbot that communicates directly with the backend and helps users complete tasks through conversation. This interaction was powered by an MCP server, acting as the bridge between the chatbot and the backend system.
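The bridge role can be pictured as a small dispatch table: the conversation layer emits a tool call by name, and the server maps it onto a backend operation. The sketch below is our simplification of that idea, not the MCP protocol itself; the tool names and handlers are invented for illustration.

```java
import java.util.*;
import java.util.function.Function;

// Sketch of the bridging idea behind an MCP server: named tools are
// registered once, and calls coming from the chatbot are dispatched to
// the matching backend operation.
public class ToolBridge {

    private final Map<String, Function<Map<String, String>, String>> tools = new HashMap<>();

    // Expose a backend operation under a tool name the model can call.
    public void register(String name, Function<Map<String, String>, String> handler) {
        tools.put(name, handler);
    }

    // Dispatch a tool call arriving from the conversation layer.
    public String call(String name, Map<String, String> args) {
        Function<Map<String, String>, String> tool = tools.get(name);
        if (tool == null) return "error: unknown tool " + name;
        return tool.apply(args);
    }
}
```

In this arrangement the chatbot never talks to the backend directly; it only ever names a tool and supplies arguments, which keeps the conversational layer decoupled from the REST API behind it.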
This experiment explored a shift toward conversational rather than form-based user interfaces, a trend likely to become more common, and one the team wanted to test and demonstrate firsthand.
AI as a Living Part of the System
One of the most engaging discussions during the presentation came during the Q&A session, when the team was asked how the system would handle changing company policies. Their answer was transparent: in its current form, updates still require manual intervention and code changes.
However, they also outlined a forward-looking vision. By further improving the requirement-analysis bot and keeping the knowledge base up to date, new or modified policies could be reprocessed automatically. This would allow the system to regenerate user stories and suggest changes — treating policies as evolving inputs rather than static documents.
Local Models: Trade-Offs in Practice
The team also experimented with running language models locally to ensure the security of internal documents. This approach offered clear benefits but also introduced challenges:
- Slower response times, sometimes up to 15–20 minutes
- Higher system resource usage
- Dependence on the developer’s local machine
Despite these trade-offs, the architecture allowed models to be swapped or reconfigured through simple property changes, making experimentation possible without touching the core codebase.
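A configuration-driven setup along these lines might look like the following sketch, where the model name and endpoint live in a properties file and switching from a local to a hosted model is a config edit rather than a code change. The property keys and defaults are our assumptions, not the project's actual configuration:

```java
import java.io.StringReader;
import java.io.UncheckedIOException;
import java.util.Properties;

// Sketch of swapping models through properties alone: the rest of the
// codebase depends only on ModelConfig, never on a concrete model.
public class ModelConfig {
    public final String model;
    public final String baseUrl;

    ModelConfig(Properties props) {
        // Illustrative keys and defaults; a locally hosted model is assumed.
        this.model = props.getProperty("llm.model", "llama3");
        this.baseUrl = props.getProperty("llm.base-url", "http://localhost:11434");
    }

    // Parse properties text (in an application this would come from a file).
    static ModelConfig load(String propertiesText) {
        Properties p = new Properties();
        try {
            p.load(new StringReader(propertiesText));
        } catch (java.io.IOException e) {
            throw new UncheckedIOException(e); // cannot occur with StringReader
        }
        return new ModelConfig(p);
    }
}
```

Because every component reads the model through this one object, experiments with different local or remote models never touch the core codebase.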
Closing Reflections
The presentation concluded with reflections on the learning experience. Beyond the technical outcomes, the project showed how internships can be spaces for real learning, growth, and meaningful contribution.
It highlighted the value of curiosity, adaptability, and collaboration, not just in building systems, but in shaping professionals who are ready for the challenges ahead.
We’re genuinely grateful to have had them with us. Their time at Nion may have been three months, but the impact they made — and the future they’re helping build — goes well beyond that.