Using AI to “cognify” your Apps

Today, terms like Artificial Intelligence and Machine Learning are used interchangeably to describe systems with capabilities not easily implemented as heuristics; applications such as speech recognition, prediction, or computer vision fit into this space. From an industry perspective, however, we rarely discuss feature-level strategies; instead we try to design into a system a set of cognitive capabilities that leverage AI, ML, and a range of associated patterns and practices.

So how do you “cognify” your apps? There are two major areas to consider.

What is the cognitive scenario you’re building for?

While cognition is more of a marketing term these days, when product teams begin to think about using it, it tends to become a solution looking for a problem. A better approach is to start from your key user scenarios and ask yourself a few questions:

  1. Can we leverage behaviours from other users to help a new customer use the system? For example, using collaborative filtering to suggest a set of starting options for a user at the beginning of a “new” interaction flow.
  2. Are there tasks we can perform on behalf of the user, using their prior interactions, that eliminate the need for certain UI components? A classic example is intensive data entry applications: initially the user is presented with data entry screens, but over time you could train a network to fill in the forms on their behalf. It’s true you could also build a heuristic approach using regular expressions to map values to field identifiers like CSS paths or XPaths, but as most people find, the edge cases make these approaches difficult to maintain and manage over time.
  3. Can we learn workflows within the application by observing the user’s interaction flows? Routing and approval paths are a perfect scenario for this: rather than users creating static rulesets for where data flows and the actions/people associated with those flows, the system can track the actual workflows in which people submit emails to one another or include users in collaborative efforts. This is more dynamic and requires less static configuration, which means it scales better and requires less administration.
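To make the first question concrete, here is a minimal sketch of a cold-start suggestion for a “new” interaction flow. It is a deliberately simple popularity-based stand-in for full collaborative filtering: it ranks starting options by how often prior users chose them first. The function name and the `history` data are illustrative assumptions, not a specific product’s API.

```python
# Rank starting options for a brand-new user by how frequently
# existing users picked them in their own first sessions.
from collections import Counter

def suggest_starting_options(first_choices, k=3):
    """Return the k most common first actions across prior users."""
    counts = Counter(first_choices)
    return [option for option, _ in counts.most_common(k)]

# Prior users' first-session choices, gathered from interaction logs.
history = ["import_csv", "blank_doc", "import_csv",
           "template", "import_csv", "template"]
print(suggest_starting_options(history, k=2))
```

A production version would segment by user similarity rather than global popularity, which is where collaborative filtering proper comes in.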

The above exercises are just examples of how to think about cognition in your product. Ultimately you are trying to discover user scenarios where the system supports decision making or automates repetitive, time-consuming tasks that cannot be achieved through heuristic methods.

What does your cognitive engineering lifecycle look like?

The software development lifecycle (SDLC) and application lifecycle management (ALM) are terms used to describe the development and deployment of engineering-based features. With the exception of data stores, your applications should be stateless, so the process by which they get designed, developed, deployed, and maintained is based on this premise.

Cognitive systems are very different in that the data processing and statefulness are inextricably linked to the “code”. The code in this case is a pipeline of feature engineering, model training, model performance evaluation, and runtime processing capabilities. Then there are the model artifacts, which are separate from the raw data they are built from.
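The pipeline described above can be sketched as composable stages, with the trained model artifact emerging as a distinct output from the raw data. Everything here is a toy assumption for illustration: the stage names, the length-threshold “model”, and the dict-based artifact.

```python
def engineer_features(raw_rows):
    # Toy feature engineering: turn (text, label) records into (x, label) pairs.
    return [(len(text), label) for text, label in raw_rows]

def train(features):
    # Toy "training": learn a length threshold separating the two classes.
    positives = [x for x, y in features if y == 1]
    negatives = [x for x, y in features if y == 0]
    threshold = (min(positives) + max(negatives)) / 2
    return {"threshold": threshold}  # the model artifact, separate from raw data

def evaluate(model, features):
    # Model performance evaluation stage: accuracy on a labeled set.
    preds = [1 if x > model["threshold"] else 0 for x, _ in features]
    correct = sum(p == y for p, (_, y) in zip(preds, features))
    return correct / len(features)

raw = [("hi", 0), ("ok", 0), ("a long message", 1), ("another long one", 1)]
feats = engineer_features(raw)
model = train(feats)
print(model, evaluate(model, feats))
```

The point is structural: the raw data, the pipeline code, and the trained artifact are three different things, and each needs its own versioning and deployment story.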

As such, you need to think about your cognitive engineering lifecycle, both in terms of development and deployment. This gives rise to the concept of “cogops”, the method by which business analysts, data scientists, software engineers, and operations teams manage a cognitive feature throughout its lifecycle.

At a high level you need to be thinking about:

  1. How do you integrate business domain experts, data scientists, and engineers, using tools and automation, to ship end-to-end features?
  2. How do you test locally and deploy into upstream environments with only environment configuration changes?
  3. How will you assess the performance of new model versions, and automate the process of upgrading those models in upstream environments?
  4. How will new data enter the lifecycle, be used to retrain or extend existing models, and how will these new signals be consumed?
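Question 3 above can be reduced to a simple gate in a deployment pipeline: a candidate model only replaces the serving model if its held-out metric beats the current one by some margin. The metric, margin, and the `registry` dict are illustrative assumptions; a real system would use a model registry and a richer evaluation suite.

```python
def should_promote(current_score, candidate_score, margin=0.01):
    """Gate promotion on a minimum improvement over the serving model."""
    return candidate_score >= current_score + margin

# Toy stand-in for a model registry tracking the serving model.
registry = {"serving": {"version": "v7", "accuracy": 0.912}}
candidate = {"version": "v8", "accuracy": 0.931}

if should_promote(registry["serving"]["accuracy"], candidate["accuracy"]):
    registry["serving"] = candidate
print(registry["serving"]["version"])  # v8 promoted
```

Automating this check is what turns model upgrades from a manual, ad hoc decision into a repeatable part of the lifecycle.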

While many aspects of engineering devops, such as monitoring and alerting, apply equally to cognitive features, these features also introduce entirely new processes such as training and performance evaluation.


It’s important to start thinking about your cognitive investments as highly integrated pieces of your architecture and not as offline silos. It’s one thing to run an R report and sneakernet it to the appropriate stakeholders for evaluation, and quite another to mainline these capabilities into your development processes and your apps.