The Beginning of AI Revolution & Human Evolution — Part 3b: Business Use Cases & AITaaS

Shankar Balakrishnan
8 min read · Oct 10, 2023

Continuing from the earlier sections of Part 3 of this blog series, in this section I cover AI for business use and AI Technology as a Service (AITaaS).

Pic 6 : Gartner Hype Cycle — Overlaid to show Reality Gap

AI for Business Use

This is a topic that many IT managers and CIOs/CTOs have been deliberating for years, and many have even succeeded in implementing AI to some extent in their IT and business systems. Being in the IT industry, I can understand how this new AI revolution might pose difficult challenges in envisioning a roadmap. Even to write about this as part of a blog, I had to write, rewrite, and review to keep it simple and relevant to the current context.

As this technology explodes ever faster, I theorize that the Gartner Hype Cycle must be repeating itself in multiple waves. This produces a varying trough in the gap between the reality and the perception of the technology trend, as indicated in the picture above.

This makes it challenging for those responsible for bringing AI into their business: they face the continuous task of keeping up with the trend and bringing technical stability to the AI platform, all while having zero impact on the business. A few years ago, AI was already being implemented across many industries and business functions, especially using deep learning and data-centric algorithms, mostly behind the scenes without users being aware of these technologies.

Specifically, the low-hanging, simple use cases of the past drew on elements of AI, data science, and statistical methods. These served as good marketing for their sponsors' own positions and for the company's brand image. The current #GenerativeAI trend has taken a different turn, though. The challenge of keeping up with such an explosion of technology can feel overwhelming, on top of the usual IT risks and concerns: information/cyber security, data privacy, GRC and compliance, and increasing costs.

Technology growth cannot be halted. Innovations and market factors will trigger further use of AI while risks and compliance are managed. In this new era, post ChatGPT, skill availability is going to increase with the usage of AI, and there are various small to large use cases that companies can start building with its help. Companies have now started posting job requirements for "prompt engineers", and prompt engineering is a skill set worth acquiring [ Learn @AndrewNg’s course for free ].

The tech/internet industry has pioneered adopting and adapting quickly to the changing scenario. Microsoft, in my view, beat Google and others by integrating OpenAI's models into Copilot for Office 365 and the Power Platform to increase productivity and to bring AI into coding and the software development lifecycle. GitHub (which Microsoft purchased in 2018) followed suit by launching GitHub Copilot. In non-tech industries too, since most business processes are digitized, there are varying degrees of AI adoption, and the Copilot approach is being introduced to make the best use of available #GenAI.

Let’s look at how organizations can start and continue the momentum in bringing AI/ML into their business purpose.

Use Cases for Internal IT Systems

Applications used internally (by employees) within an organization are probably a good starting point for identifying the simplest potential business cases that can be piloted using AI/ML:

  • Systems with less exposure to external systems, or third-party APIs
  • Systems with very minimal changes to existing Information Security, Data Governance or Compliance Controls

A few examples:

Pic 7 : Business Use Case Categorization Example

I will share briefly here about some of the use cases listed above.

  • JD (Job Description) Creation: This is a very simple use case for anyone in the organization who needs to convert a text description or a list of key required skills into an automatically generated JD, and then evolve it into the required template for publishing, aligned with critical project requirements. Here is a simple Github repo example for someone to start with.
  • Candidate Profile Fitment: This could be very handy for evaluating candidate profiles (in different formats) for fitment against a specific JD. Another Github repo to play with.
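To show how small the JD-creation use case above can be, here is a minimal Python sketch, assuming an OpenAI API key is available in the environment. The model name, prompt wording, and function names are my own placeholders for illustration, not the code from the linked repos.

```python
def build_jd_prompt(role, skills, template="summary, responsibilities, requirements"):
    """Assemble a prompt asking an LLM to draft a job description
    from a role title and a list of required skills."""
    return (
        f"Write a job description for the role of {role}. "
        f"Required skills: {', '.join(skills)}. "
        f"Structure it as: {template}."
    )

def generate_jd(role, skills, model="gpt-4o-mini"):
    """Send the prompt to the OpenAI chat completions API
    (the API key is read from the OPENAI_API_KEY environment variable)."""
    from openai import OpenAI  # imported lazily so the pure helper above stays testable
    client = OpenAI()
    response = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": build_jd_prompt(role, skills)}],
    )
    return response.choices[0].message.content
```

Keeping the prompt builder separate from the API call makes the deterministic part easy to review and test, which matters once such micro use cases multiply.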

Team members with very little coding experience can implement these micro use cases themselves, much like a low-code/no-code approach, or by using a Copilot development feature that takes help from AI itself. For example, here is my conversation with ChatGPT, in which I asked it to look at recruitment business processes and took its help to develop a Python application: one version using a Python library and another using the OpenAI API.

You can visit my GitHub for more details on other business use cases. I plan to keep updating my Github for #crowdsourcing ideas and small POC implementation.

For instance, here is my brief interview :-) of ChatGPT on its understanding of SAFe and Agile principles, in which I asked it to provide an outline of code for a virtual scrum master agent. [ VSM — ChatGPT ] [ VSM — Python Outline Code ]
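For readers curious what such an outline might look like, here is a hedged sketch of a standup-tracking skeleton. This is my own simplification of the idea, not the code ChatGPT produced in the linked conversation; the class and method names are illustrative.

```python
class VirtualScrumMaster:
    """Skeleton agent that tracks a sprint's standup updates for a team."""

    def __init__(self, team):
        self.team = list(team)
        self.updates = {}  # member -> (done, doing, blockers)

    def record_update(self, member, done, doing, blockers=None):
        """Store one member's standup update for the day."""
        self.updates[member] = (done, doing, blockers or [])

    def blockers(self):
        """Collect blockers across the team, e.g. for escalation."""
        return {m: b for m, (_, _, b) in self.updates.items() if b}

    def standup_summary(self):
        """Produce a plain-text summary, flagging members with no update."""
        lines = [f"{m}: done={d}; doing={n}" for m, (d, n, _) in self.updates.items()]
        missing = [m for m in self.team if m not in self.updates]
        if missing:
            lines.append("No update from: " + ", ".join(missing))
        return "\n".join(lines)
```

An LLM layer could sit on top of this skeleton, turning free-text standup messages into structured `record_update` calls.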

Categorizing the different steps of a business process within an application in this manner helps identify where an AI-based use case can be developed. It further boils down to creating multiple reusable AI components that draw on text/media content sourced directly from the systems. Over time, an AI-based IT architecture could be developed that uses a plug-and-play approach.
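One possible shape for such plug-and-play reuse is a small registry that maps capabilities to interchangeable AI components. This is a sketch of the idea only; the capability names are hypothetical.

```python
class AIComponentRegistry:
    """Plug-and-play registry: reusable AI components keyed by capability,
    so a workflow can swap implementations without changing callers."""

    def __init__(self):
        self._components = {}

    def register(self, capability, component):
        """Register a callable (e.g. a summarizer or classifier) under a name."""
        self._components[capability] = component

    def run(self, capability, payload):
        """Dispatch a payload to the component registered for that capability."""
        if capability not in self._components:
            raise KeyError(f"no component registered for '{capability}'")
        return self._components[capability](payload)
```

A workflow step then asks for "summarize" or "classify" rather than binding to a specific model, which is what lets the architecture evolve component by component.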

To minimize the risks of factual errors associated with most AI systems, certain key factors can be considered:

  • Plan a transition period for switching from the existing system to the AI-based one.
  • Use the transition period to train the data model with a larger data set.
  • Compare non-AI and AI-based results, and develop a feedback loop to provide advice and support to end users.
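The comparison and feedback-loop idea above can be run in "shadow mode": both paths execute, disagreements are logged, and the business keeps seeing the legacy result during the transition. A minimal sketch, with illustrative function names of my own:

```python
def compare_and_route(item, legacy_fn, ai_fn, log):
    """Run both the legacy and the AI path on one item, log whether they
    agree, and return the legacy result so users see no behaviour change
    during the transition period."""
    legacy_result = legacy_fn(item)
    ai_result = ai_fn(item)
    log.append({
        "item": item,
        "legacy": legacy_result,
        "ai": ai_result,
        "agree": legacy_result == ai_result,
    })
    return legacy_result

def agreement_rate(log):
    """Fraction of logged items where the two paths agreed; a simple
    signal for deciding when the AI path is ready to take over."""
    if not log:
        return 0.0
    return sum(1 for entry in log if entry["agree"]) / len(log)
```

Once the agreement rate is consistently high, the return value can be switched to the AI result, and the same log keeps feeding the support process.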

The number of use cases at the micro level is thus enormous, as each workflow or business process can be progressively experimented with using an AI-based approach. Imagination is the limit: there could be thousands (if not millions) of possible use cases. As some of these get built, core foundational AI/ML capabilities specific to the organization will take shape, and those can then be scaled up and replicated across different scenarios.

More importantly, developing such use cases internally helps build the required skill sets and capabilities within the organization. It is not that AI will replace human beings. As AI technology develops, humans will need to develop more and more skills to interface human-facing systems with AI systems [ Prompt Engineering course by @AndrewNg ]. Added to that, human wisdom will be required to interpret the outcomes and govern such systems. So as humans, we will need to learn and adapt to the new technology for our own evolution! Some more references for business use cases:

[ @Nickolas Belesis’s post on FinTech Use Cases ] [ @Stev Nouri’s post on Microsoft AI and OpenAI FinTech Use Cases ] [ @Senthil Nathan’s post on Manufacturing Use Cases ] [ You.com’s YouAgent ]

AI Technology as a Service (AITaaS)

I am not getting here into Artificial Intelligence as a Service (AIaaS) alone, which is more about providing AI as part of cloud computing services, although those offerings do fall under AI service providers. In this section, I would like to focus on the perspective of technology service providers (AITaaS) who act as implementation partners/vendors for an organization's various IT requirements.

Like other ISVs, some service providers have AI offerings: products and solution frameworks that can be integrated into different aspects of business processes. Such companies can help their clients with customized solutions and with evolving and scaling up their own AI use cases. Some service providers could also provide AI integration services, and some offer an 'AI analyst' to help train and refine LLMs on client data sets.

[ Tokenization, Model Architecture, Pre-training, Fine-tuning, Evaluation, and Iterative refinement ]

In my view, unlike traditional application development or ERP implementation, AI Technology as a Service will see a definite shift in mindset for both service providers and their clients.

This will pave the way for a new paradigm across provider offerings, AI platform architecture, and the AI development lifecycle and its related project management activities.

I will touch on these briefly here:

AI Platform Architecture

The AI use case market is currently fragmented. As more use cases get implemented in different business areas, and after a couple of iterations of refining the training model, the platform architecture for GenerativeAI will evolve around factors such as:

  • Single LLM Architecture
  • Multi LLM Architecture
  • Hybrid LLM Architecture
  • Single Modal & Multi Modal Use Cases
  • LLM Data Training Model [ Link 1 from Databricks ] [ Link 2 by @doriandrost ]
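To make the single/multi/hybrid distinction in the list above concrete, here is a toy routing sketch: a single-LLM setup would send everything to one model, a multi-LLM setup dispatches by task, and a hybrid adds a catch-all fallback. The backend names are placeholders of my own, not real endpoints.

```python
def route_request(task, modality="text"):
    """Toy LLM router illustrating the architecture options:
    pick a backend per task type and modality."""
    if modality != "text":
        return "multimodal-model"   # multi-modal use cases get a dedicated model
    if task in {"summarize", "draft"}:
        return "general-llm"        # shared model, as in a single-LLM architecture
    if task == "code":
        return "code-llm"           # specialist model in a multi-LLM architecture
    return "fallback-llm"           # hybrid: catch-all for unrecognized tasks
```

In a real platform the routing policy itself becomes an architectural decision, shaped by cost, latency, and the training-data strategy linked above.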

I hope to expand on this at a later time with more details.

AI Implementation Lifecycle / Project Management

This may be off topic here, but when the technology implementation model changes, a more adaptive and pragmatic approach is needed for the different phases of AI development and for how the activities around them are managed. Unlike a traditional waterfall, agile, or consultancy model of implementation, I believe we will see organizations go through an exploratory and proof-of-concept phase, followed by a 'data staging' phase with the business operating in a hybrid AI/non-AI mode. These phases might be iterative (with the data model being trained in parallel) and will have unpredictable, inconsistent outcomes that may take longer to refine, due to real-time data alignment. A carefully designed 'support process' for such AI systems will help reach a 'stable phase', which could become the new acceptable norm for AI-based business systems (despite any issues that might arise without a plausible solution).

One important aspect of implementing GenerativeAI will be developing a new approach to QA/testing. Google calls out 'Poka-Yoke' quality testing principles. Most LLMs are currently on the same page in terms of the broader scope of filtering and testing for 'potential misinformation', 'bias', and 'hallucinations'. A few references to keep thinking about in this regard: [ Responsible AI Practices by Meta ] [ Responsible AI by Google ]. Facebook/Meta's LLaMA research brings this out succinctly.
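As a toy example of what a hallucination-oriented QA check might look like, here is a crude lexical-overlap heuristic of my own, not a method from Google or Meta: it flags answer sentences that share too few words with the source material, so a human can review them.

```python
import re

def _words(text):
    """Lowercased word set for a piece of text."""
    return set(re.findall(r"[a-z]+", text.lower()))

def grounding_score(answer, sources):
    """Fraction of answer sentences that share at least three words with
    some source passage. A low score flags possible hallucination for
    human review; it is a cheap screen, not a semantic check."""
    sentences = [s for s in re.split(r"[.!?]\s*", answer) if s]
    source_sets = [_words(s) for s in sources]
    grounded = sum(
        1 for sentence in sentences
        if any(len(_words(sentence) & ws) >= 3 for ws in source_sets)
    )
    return grounded / len(sentences) if sentences else 0.0
```

Real pipelines would replace the word overlap with embedding similarity or an LLM-as-judge step, but the shape of the gate, score the output, route low scores to support, stays the same.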

As I mentioned in my earlier blog on AI governance and alignment, it is important to keep these in mind while strategizing on the AI platform and the quality engineering approach. OpenAI calls for alignment as well. Considering these factors, the project management approach to AI will need to be revised. My personal preference is to use PMI's Disciplined Agile principles: develop "Ways of Working (WoW)" upfront based on your current context, and then adapt. As my mentor in this space @Daniel Gagnon puts it on his podcast Leading through Digital Chaos, "Without a unified vision, digital transformations can be more disruptive than beneficial."

The challenge for AITaaS vendors in this space will be to translate these requirements into a financial model, and to back it up with appropriate, mutually agreeable legal bindings in their contractual documents. Possibly this is another AI use case in itself! 😀

Thank you all for reading up to this point. As always, I look forward to your feedback and suggestions.

A Few References:

[ IBM Watsonx.AI ] [ Framer AI ] [ Teachable AI ] [ MLOps ] [ AutoML ] [ CreateML by Apple ] [ Generative BI by Akkio ] [ Magic Studio by Canva ][ Machine Learning Service — Amazon SageMaker ] [ @Bernard Marr on Low Code No Code AI ] [ Facebook Meta’s LLaMA Research and release ] [ Google’s Palm2 ] [ @Yann LeCun on Autonomous Intelligence ]
