
TNE.ai Dashboard — AI Model Monitoring Platform

Time frame: May 2025 – June 2025

Course: CS 394 Agile Software Engineering

Collaborators: 5 Computer Science majors

Overview

TNE.ai is a web application designed for ML operations teams to monitor AI model performance across deployments. The platform transforms low-level logs and metrics into actionable insights through centralized visualization and analysis.

TNE.ai product overview — monitoring AI model performance, diagnosing errors, and optimizing business-specific needs

Understanding the Problem

Modern AI engineers rely on multiple disconnected tools to monitor latency, system errors, and engagement metrics. When performance issues arise, identifying root causes often requires manually searching through logs across systems — a slow and fragmented process.

Our goal was to design a centralized dashboard that transforms scattered low-level metrics into actionable insights, allowing teams to diagnose issues quickly and shift from reactive troubleshooting to proactive monitoring.

TNE.ai four panel storyboard
From fragmented log monitoring to a unified, proactive dashboard — the core problem TNE.ai was built to solve

Agile Development Across Teams

Development occurred across multiple teams working simultaneously toward a shared product vision. Rotating product owners coordinated priorities each week, ensuring alignment between engineering progress and client expectations.

Weekly client meetings and tribe-wide Slack communication allowed rapid feedback cycles. Agile tracking through burnup charts and shared backlogs helped balance feature scope, manage dependencies, and maintain steady progress toward demo milestones — enabling frequent integration while minimizing merge conflicts.

Multi-team development structure
How the team structured client communication, product ownership, and cross-team coordination
Project status artifacts
Release burnup chart and backlog tracking story points completed across two iterations

System Architecture and Implementation

The platform processes uploaded JSON log data through a structured three-phase pipeline designed to convert raw system activity into interpretable performance metrics.

Log data is first parsed and standardized into structured query records. The system then aggregates performance indicators such as latency distributions, failure rates, and response patterns. Finally, the dashboard visualizes these trends through interactive charts, letting engineers drill into model behavior across deployments and interpret complex operational data quickly without manual log inspection.
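The three phases can be sketched in miniature. This is a hypothetical illustration, not the production code: the field names (`model`, `latency_ms`, `status`) and the helper functions are assumptions standing in for whatever schema the real pipeline used.

```python
import json
import statistics
from dataclasses import dataclass

@dataclass
class QueryRecord:
    """Phase 1 output: a standardized record parsed from one raw log entry."""
    model: str
    latency_ms: float
    success: bool

def parse_logs(raw: str) -> list[QueryRecord]:
    """Phase 1: parse an uploaded JSON array of log entries into query records."""
    records = []
    for entry in json.loads(raw):
        records.append(QueryRecord(
            model=entry.get("model", "unknown"),
            latency_ms=float(entry["latency_ms"]),
            success=entry.get("status", "ok") == "ok",
        ))
    return records

def aggregate(records: list[QueryRecord]) -> dict:
    """Phase 2: aggregate performance indicators (latency percentiles, failure rate)."""
    latencies = [r.latency_ms for r in records]
    failures = sum(1 for r in records if not r.success)
    cuts = statistics.quantiles(latencies, n=100)  # 99 percentile cut points
    return {
        "query_count": len(records),
        "failure_rate": failures / len(records),
        "latency_p50_ms": cuts[49],
        "latency_p95_ms": cuts[94],
    }
```

Phase 3 would hand a summary like this to the charting layer, which renders the interactive latency and failure-rate views.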

TNE.ai dashboard overview
Dashboard overview showing parsed file metadata, query statistics, response times, and flagged warnings from uploaded JSON logs

Outcome

The final product delivered a unified monitoring experience capable of visualizing performance trends across AI systems while enforcing standardized data ingestion through structured JSON uploads.
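Enforcing a standardized upload format usually means rejecting malformed files before ingestion. A minimal sketch of such a gate, assuming a hypothetical required-field set rather than the project's actual schema:

```python
import json

# Hypothetical required fields per log entry; the real schema may differ.
REQUIRED_FIELDS = {"model", "latency_ms", "status"}

def validate_upload(raw: str) -> list[str]:
    """Return a list of validation errors; an empty list means the upload is accepted."""
    try:
        entries = json.loads(raw)
    except json.JSONDecodeError as exc:
        return [f"invalid JSON: {exc}"]
    if not isinstance(entries, list):
        return ["upload must be a JSON array of log entries"]
    errors = []
    for i, entry in enumerate(entries):
        if not isinstance(entry, dict):
            errors.append(f"entry {i}: expected an object")
            continue
        missing = REQUIRED_FIELDS - entry.keys()
        if missing:
            errors.append(f"entry {i}: missing fields {sorted(missing)}")
    return errors
```

Checking uploads up front keeps downstream aggregation simple, since every accepted record is guaranteed to carry the fields the metrics depend on.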

Improved navigation and dashboard readability
Standardized data formats for consistent analysis
Stronger collaboration workflows across distributed teams
TNE.ai final dashboard
Final TNE.ai dashboard — JSON upload entry point for AI model performance monitoring

Key Learnings

The project emphasized the importance of communication, rapid iteration, and balancing technical ambition with delivery timelines in a multi-team environment.

Communication

Frequent Slack updates and client meetings kept distributed teams aligned and reduced integration conflicts.

Scoping Tradeoffs

Dropping LLM-generated recommendations allowed the team to focus on delivering a reliable, polished core product.

Iteration Speed

Rapid feedback cycles and shared backlogs helped balance feature scope with demo milestones.

Final tribe slice
Final tribe slice showing completed focuses, dropped stories, and integration decisions