Simplifying the journey from AI’s theoretical potential to its real-world application: a project framework.

By Deepak Damodarr, Data Office Lead, Neos Networks

At Neos Networks, we’re a leading telecoms provider on an ambitious journey of growth. Like many businesses today, we’re excited to explore the potential of AI technologies to enhance customer service and accelerate our development. However, our guiding principle is always the strategic optimisation of our resources.

The advent of generative AI has reignited interest in data, leading to a surge in requests for AI-driven solutions. Our challenge lies in meeting these demands without compromising our strategic objectives or straining our resources. This involves a careful balance of managing expectations and educating our stakeholders on the realistic capabilities and limitations of AI within our operations.

To navigate these waters, we’ve developed an AI Project Framework aimed at providing a clear roadmap for developing and deploying AI models. It serves as a bridge between theoretical potential and practical application, and helps us to ensure we’re only investing in projects that support our growth journey.

The AI Project Framework

The project framework consists of three core phases:

Discovery, which clarifies the business problem and assesses the practical feasibility of solving it with an AI model;

Compliance and approval, during which key stakeholders assess the security, ethical and governance implications of the project, as well as its expected value; and finally,

Development and delivery.

Let’s take a look at each phase in detail.

Infographic outlining the key steps in an AI project framework: 1. Qualify the use case to understand the business problem and assess if AI is the right technology to solve it. 2. Technical and data assessment to assess availability, accessibility and accuracy of the data. 3. Triage data to remove sensitive data. 4. Sign-off by governance and legal. 5. Sign-off by board, develop and deliver.

Laying the Groundwork: Discovery Phase

The discovery phase is the initial leg of the journey: a critical period of exploration and understanding that will set the right direction for our AI project.

1. Use Case Qualification

Every journey begins with a question. Here, we dive deep to understand exactly what problem we’re trying to solve with AI. It’s not just about whether AI can be used, but whether it should be used. This can lead to some challenging conversations and hard truths about what’s really needed versus what’s simply desirable, but it is key to delivering the optimal solution for the business problem.

2. Data and Technical Assessment

This is where we begin to look at the data itself – where is it? Is it accessible? Is it in good shape? Can we even use it the way we need to? It’s not uncommon to discover that there is no source system for the data, or that it lies outside our infrastructure, which requires us to go through the extra steps of securing access permissions and identifying potential technical, legal and security limitations. These are major hurdles at which projects often fall.

Data Quality

As we know, good data is the foundation of any AI model; without it, AI is simply not feasible. Here, we use off-the-shelf data management tools to assess the quality of the data, looking for errors, duplicates and missing data, and obtaining a data quality index for the overall set that determines whether we can continue to the next step.
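The article doesn’t name the data management tools in use, but the idea of a data quality index can be sketched in plain Python. The function below is a hypothetical illustration only: it scores a set of records on the three defect classes mentioned above (missing values, duplicates and format errors), with an assumed equal weighting between them.

```python
def quality_index(rows, validators):
    """Score a list of record dicts between 0.0 (unusable) and 1.0 (clean).

    ``validators`` maps a column name to a predicate that returns True for
    a well-formed value -- a stand-in for a real tool's rule engine.
    """
    if not rows:
        return 0.0
    total_cells = len(rows) * len(rows[0])
    # Missing data: empty or absent cell values.
    missing = sum(1 for r in rows for v in r.values() if v in (None, ""))
    # Duplicates: identical records counted beyond their first occurrence.
    duplicates = len(rows) - len({tuple(sorted(r.items())) for r in rows})
    # Errors: populated cells that fail their column's validation rule.
    errors = sum(
        1
        for r in rows
        for col, check in validators.items()
        if r.get(col) not in (None, "") and not check(r[col])
    )
    # Equal weighting of the three defect classes is an assumption;
    # a real tool would let you tune this per data set.
    penalty = (missing / total_cells
               + duplicates / len(rows)
               + errors / total_cells) / 3
    return max(0.0, 1.0 - penalty)


# Hypothetical telecoms-flavoured records for illustration.
records = [
    {"circuit_id": "C-100", "bandwidth_mbps": "1000"},
    {"circuit_id": "C-100", "bandwidth_mbps": "1000"},  # duplicate
    {"circuit_id": "", "bandwidth_mbps": "ten"},        # missing + error
]
score = quality_index(records, {"bandwidth_mbps": str.isdigit})
print(f"data quality index: {score:.2f}")
```

A threshold on the returned index (say, proceed only above 0.9) would then play the gating role described above.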

3. Triage of Data

With confidence in the quality of the data set, we start to get a clearer understanding of the type of data we’re working with. This step is crucial for navigating privacy concerns and ensuring we’re not stepping over any lines. Here, we use a multi-tagging approach that allows us to detect, hide or filter PII (Personally Identifiable Information), and also to classify the remaining data into different classes of sensitivity. In the future, this will be automated, paving the way to proactively applying governance to data based on its classification.
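To make the multi-tagging idea concrete, here is a minimal sketch. The detection patterns and sensitivity tiers are hypothetical examples, not the classifiers the team actually uses; real PII detection typically combines many more patterns with named-entity recognition.

```python
import re

# Illustrative detection rules -- real tooling would cover far more classes.
PII_PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "uk_phone": re.compile(r"\b(?:\+44|0)\d{9,10}\b"),
}


def triage(text):
    """Detect and redact PII, returning (masked_text, tags, tier)."""
    tags = []
    for label, pattern in PII_PATTERNS.items():
        if pattern.search(text):
            tags.append(label)  # tag the record with each PII class found
            text = pattern.sub(f"<{label}-redacted>", text)
    # Any PII hit escalates the record; classifying the remaining non-PII
    # data into finer-grained sensitivity tiers is elided here.
    tier = "restricted" if tags else "internal"
    return text, tags, tier


masked, tags, tier = triage("Contact jane@example.com or 07700900123")
print(masked)   # both the email address and phone number are redacted
print(tags, tier)
```

Run over a training set before it leaves the data platform, a pass like this supports the “detect, hide or filter” options described above: redact in place, drop tagged records, or route them for review according to their tier.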

At this stage, we will know whether this data set is viable for training our desired AI model, but there is one final step in the technical assessment: hosting.

Hosting: A Key Decision 

Choosing where to host our data and AI models is a bigger decision than it might seem. It’s not just about security and compliance (though those are important considerations), but also about agility and efficiency. If resources permit, in-house is best. However, if they do not, we need to look at externalising the hosting of the data and the running of the model. It can take up to six weeks to assess third-party options; however, the result may be deploying in a week, versus developing the environment in-house, which could take three to four months.

It’s a decision that requires a careful balance of priorities and stakeholder engagement.

Compliance and Approval

The next phase of the lifecycle is compliance and approval.

Once the compliance checks and technical viability assessments are complete, the decision boils down to answering three questions: “Can we, should we, and is it worth it?”

It’s up to the key stakeholders (security, legal, governance and business sponsors) to reach this decision together. The process usually involves several rounds of reviews, for which we need to be well prepared, providing all the key information from our assessments to date, including project timelines. Here, business owners will also present the value that the project is expected to achieve.

If approved, the final decision to go ahead is made by the board, who also decide the prioritisation of projects.

Development and Delivery

Development is where ideas are finally put into practice. Our DevOps team will create the user stories required to train the model and relatively quickly give us initial output that will indicate if we will succeed. As the model becomes more mature, we will publish the results to a wider audience. At this stage, new scenarios often emerge requiring us to adapt and sometimes even revisit earlier stages of the lifecycle. It’s important to be flexible and anticipate that outcome, while at the same time maintaining open communication about the impact on resource and timeline.

The AI Project Framework is a step towards ensuring that new AI developments are guided by a clear understanding of our capabilities and goals. It will continue to evolve, just as technologies do, but in the meantime, it is a valuable tool to help key stakeholders better navigate the complexities of AI development in a way that’s both strategic and sustainable.

Collaboration both at company and industry level is key to successfully building an AI-driven future. If you have any comments or questions relating to this article, don’t hesitate to reach out to me.

For more on strategic AI integration, download “Orchestrating Your Company’s AI Strategy: A Chief Data Officer’s Guide”.
