
Artificial Intelligence


Move towards Gen Apps or Generative AI Apps

Web Apps, Mobile Apps, and now Gen Apps. Gen Apps, or Generative AI Applications, are applications that generate new content based on user input. They can converse and produce text, images, code, video, audio, and more from simple natural-language prompts. The possibilities are endless!
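To make the idea concrete, here is a minimal sketch of a Gen App loop in Python. The endpoint URL and the "text" response field are hypothetical placeholders, not a specific vendor's API.

```python
# A minimal "Gen App" sketch: take a natural-language prompt, call a
# text-generation API, and return the generated content.
# GEN_ENDPOINT and the "text" response field are hypothetical placeholders.
import requests

GEN_ENDPOINT = "https://example.com/v1/generate"

def generate(prompt: str) -> str:
    """Send a prompt to the generation service and return the generated text."""
    resp = requests.post(
        GEN_ENDPOINT,
        json={"prompt": prompt, "max_tokens": 256},
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()["text"]

if __name__ == "__main__":
    print(generate("Write a two-line product description for a smart coffee mug"))
```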

Building Gen Apps requires a new set of integrated tools. Google Cloud's recent announcement introduces a Generative AI App Builder that allows developers to quickly ship new experiences, including bots, chat interfaces, custom search engines, digital assistants, and more. More details at – https://cloud.google.com/blog/products/ai-machine-learning/generative-ai-for-businesses-and-governments

Looking forward to trying out the Generative AI App Builder.


Real and Generative AI Book – Second Edition Available

Happy to share that the second edition of my book, “Real and Generative AI,” is now available digitally through Amazon.

The second edition covers the latest buzz around Generative AI, ChatGPT, the current landscape and challenges, and what it would take for enterprises to adopt Generative AI Chatbots like ChatGPT.

The first edition was released 4+ years back. It was good to go back and validate the predictions that were made.

Is this current hype real, or have we just started scratching the surface of intelligence?  Read the book for more details.

Order your copy at – https://amzn.to/3yDq6zP


Responsible AI, Ethical AI, and ChatGPT

Responsible AI, in simple words, is about developing explainable, ethical, and unbiased models. For instance, Google has published its AI principles – https://ai.google/principles/, which discuss this subject in detail. Similarly, Microsoft has published its AI principles at https://learn.microsoft.com/en-us/azure/cloud-adoption-framework/innovate/best-practices/trusted-ai. These key AI principles should be part of the design and development of large language models, as millions of users will see the output of these AI systems. However, with ChatGPT, many instances fall short of these AI principles.

  • Lack of Trust – Responses can sound plausible even when the output is false (Reference – https://venturebeat.com/ai/the-hidden-danger-of-chatgpt-and-generative-ai-the-ai-beat/). You can’t rely on the output and eventually need to verify it yourself.
  • Lack of Explainability on how the responses are derived. For instance, if a response is created from multiple sources, the sources should be listed and attributed. There might be inherent bias in the content, and it is unclear how this is removed before training or filtered from the response. If the response is generated from multiple sources, was any source given priority over the others? Currently, ChatGPT doesn’t provide any explainability for its answers. A sketch of what an attributed response could look like follows this list.
  • Ethical aspects – One example is code generation. The generated code carries no attribution to the original code, author, or license details. Open source has many licenses (https://opensource.org/licenses/), and some can be restrictive. Also, were any open-source repositories given priority during training (or output filtering) over others? Questions about the code’s security, vulnerabilities, and scalability must also be addressed. It is ultimately the accountability and responsibility of the developer to ensure that the code is reviewed, tested, secure, and follows their organization’s guidelines. All the above details should be made transparent and addressed. For instance, if customers ask for a Certification of Originality for their software application (or if such a law emerges in the future), this could be a challenge with AI-generated code unless the above is considered.
  • Fairness in responses – An excellent set of principles and an AI Fairness Checklist are outlined in the Microsoft paper – https://query.prod.cms.rt.microsoft.com/cms/api/am/binary/RE4t6dA. Given that ChatGPT is trained on internet content (a massive 570 GB of data sourced from Wikipedia, research articles, websites, books, etc.), it would be interesting to see how many items from the AI Fairness Checklist are followed. For instance, the content may be biased and have limitations, and the human feedback used for data labeling may not represent the wider world (Reference – https://time.com/6247678/openai-chatgpt-kenya-workers/). These are some of the reported instances, but many more may remain undiscovered. Large-scale language models should be designed fairly and inclusively before being released. We should not simply say that the underlying content was biased and hence the trained model inherited the bias; we now have an opportunity to remove those biases from the content itself as we train the models.
  • Confidentiality and privacy issues – As ChatGPT learns from interactions, there are no clear guidelines on what information will be used for training. If you interact with ChatGPT and end up sharing confidential data or a customer’s code for debugging, will that be used for training? Can we claim ChatGPT is GDPR compliant?
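As a thought experiment for the explainability point above, here is a minimal sketch of what an attributed response payload could look like. The field names are illustrative assumptions, not part of any real ChatGPT API.

```python
# Illustrative only: a response that carries its own explainability metadata
# (sources, licenses, contribution weights, confidence). Field names are
# hypothetical, not an existing API.
from dataclasses import dataclass, field
from typing import List

@dataclass
class SourceAttribution:
    url: str        # where the supporting content came from
    license: str    # e.g. "MIT", "CC-BY-4.0"
    weight: float   # relative contribution of this source to the answer

@dataclass
class ExplainedResponse:
    answer: str
    confidence: float                                   # model's confidence estimate
    sources: List[SourceAttribution] = field(default_factory=list)

    def summary(self) -> str:
        cited = ", ".join(s.url for s in self.sources) or "no sources recorded"
        return f"{self.answer}\n(confidence={self.confidence:.2f}; sources: {cited})"
```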

I have listed only some highlights above to raise awareness around this important topic.

We are at a junction where we will see many AI advancements like ChatGPT that millions of users will use. As these large-scale AI models are released, we must embed Responsible AI principles in them to create a safer and more trusted environment.


ChatGPT and Generative AI Similarity Syndrome

What is Similarity Syndrome?

Similarity Syndrome is a perception that might be created by large language models like ChatGPT, where any unique work is ultimately treated as common knowledge or as similar to other work.

For instance,

  • You write a unique quote for your book, and ChatGPT may end up saying it’s common knowledge and similar to many other quotes instead of attributing it to you.
  • You create an algorithm, and Codex uses your code with other algorithms to create something similar. It will reuse your code and sell it back to you.
  • You create digital art and unique images, and DALL·E 2 uses your image and other images to create similar images. It will sell this as an image library, and you buy it without realizing your image contributed to it.

In general, any piece of content can be used to create something similar. There is no explainability of how the dynamic response was created and no attribution to the original authors or content creators. There should be strict copyright and accountability laws for AI.

As an experiment, I asked ChatGPT about a quote I wrote in my book. The interaction is given below.

Many websites found through Google search attribute the quote to me (for instance, https://www.goodreads.com/author/quotes/49633.Naveen_Balani).

This is not just about quotes but about any generated content, be it digital art, music, software code, or marketing content. AI will use your data, generate something similar, not attribute it to you, call it common knowledge, and even sell it back to you. We will end up living in an AI world where everything is similar : )

To conclude, AI models should be transparent, explainable, auditable, ethical, and, most importantly, credit the original work created by the authors.


ChatGPT Even Predicts the Future

ChatGPT seems good at converting facts to fiction. I asked ChatGPT about myself; around 70% of the information was cooked up (see image below). On a lighter note, maybe it’s predicting a future I am unaware of : ). But what about the past details? I need a time machine to go back and change them. Hopefully, I will write a post on the time machine someday.

With Google search, all the correct details and my website show up.

With ChatGPT, I expected that at least this information would be correct, as it’s readily available. The sources could be LinkedIn, my website, or other credible sources. There is no logic and no complex deep learning network to apply. The biggest problem with the response below is that there is no explainability and no detail on the sources used to construct it. How can one verify the correctness of the response? The responses are created dynamically even when not required. Unless you design with explainability in mind, the issues around trust and transparency will not be resolved. There are other issues around bias, ethics, and confidentiality, which I will discuss in future posts.

Unless you already know the right answer, it would be difficult to tell from the above response, as the answers are grammatically correct and may sound right. This is a very simple scenario. Moving from general intelligence to specific verticals like healthcare (for instance, diagnosis) increases the complexity of large language models and poses significant challenges. I have discussed this in a previous blog on the lack of Domain Intelligence in ChatGPT (and other conversational engines).

I can’t predict the future, but I believe Google Search might be better positioned to take Generative AI and integrate it with its search engine. Hopefully, they follow their 7 AI principles – https://ai.google/principles/ before the release to make AI applications transparent, explainable, and ethical. “Slow but steady wins the race” might come true for Google.

With the AI-powered Bing search engine, some early feedback and interesting facts about long chat sessions going wrong are documented on their website – https://blogs.bing.com/search/february-2023/The-new-Bing-Edge-%E2%80%93-Learning-from-our-first-week.

In my next post, I will talk about an interesting subject – “Similarity Syndrome in ChatGPT.”


ChatGPT for Enterprise Adoption

To make ChatGPT relevant for enterprise adoption, we need:

  • Domain Adaptability
  • Domain Intelligence
  • Explainability
  • Transparency
  • Non-biased responses
  • Privacy
  • Scalability – compute power for training and inference
  • Lower environmental footprint

ChatGPT has a long way to go for enterprise adoption. What do you think? Where do ChatGPT and other large language models stand on enterprise adoption?


From Watson to ChatGPT: AI Chatbots and Limitations

The release of ChatGPT and the responses it provided brought Conversational AI back to the forefront and made it available to everyone through a simple web interface. We saw many creative ways to use ChatGPT, discussions on how it might impact the future, and questions around whether it will replace the Google Search engine and jobs.

Well, let’s address this question with the analysis below –

From early Watson systems to ChatGPT, a fundamental issue still remains with Conversational AI.

Lack of Domain Intelligence.

While ChatGPT definitely advances the field of Conversational AI, I would like to call out the following from my book, Real AI: Chatbots (published in 2019):

“AI can learn but can’t think.”

Thinking – deciding how to use the output of an AI system – will always be left to humans. AI systems and their knowledge will always be boxed into what they have learned and can never generalize the way humans do where domain expertise and intelligence are required.

What is an example of Domain Intelligence?

Take a simple example where you ask the Conversational AI agent to “Suggest outfits for Shorts and Saree”.

Fundamentally, any skilled person would treat these as two different options – matching outfits with shorts and matching outfits with a saree – or ask clarifying questions, or point out that the options are disjoint and can’t be combined.

But with ChatGPT (or any general-purpose Conversational AI), the response was as shown below: clearly, without understanding the domain and context, it tries to fill in some response. This is a very simple example, but the complexity grows exponentially where deep expertise and correlation are required – like a doctor recommending treatment options. This is the precise reason we saw many failures when AI agents were used to solve healthcare problems; they tried to train general-purpose AI rather than building domain-expert AI systems.
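As a toy illustration (not how ChatGPT works internally), a domain-aware assistant could encode a simple rule: if the requested items belong to incompatible outfit families, ask a clarifying question instead of inventing an answer. The garment-to-style mapping below is a placeholder for real domain knowledge.

```python
# Toy illustration of a domain rule a specialised assistant could apply.
# OUTFIT_STYLE is placeholder domain knowledge, not a real knowledge base.
OUTFIT_STYLE = {
    "shorts": "casual-western",
    "saree": "traditional-indian",
}

def suggest_outfits(items):
    styles = {OUTFIT_STYLE.get(item.lower(), "unknown") for item in items}
    if len(styles) > 1:
        # The items don't belong together: ask instead of guessing.
        return ("These items belong to different outfit styles. "
                "Should I suggest matching options for each one separately?")
    return f"Here are matching options for {', '.join(items)}..."

print(suggest_outfits(["Shorts", "Saree"]))  # prints the clarifying question
```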

The other issues with this generative dialog AI system are –

Explainability – Making the AI output explainable as to how it was arrived at. I have described this in my earlier blog – Responsible and Ethical AI.

Trust and Recommendation Bias – Right recommendations and adaptability. I have explained this in my earlier blog – https://navveenbalani.dev/index.php/views-opinions/its-time-to-reset-the-ai-application/

For more details, I have explained this concept in my short ebook – Real AI: Chatbots (2019) – https://amzn.to/3CmoexC

You can find the book online on my website – https://cloudsolutions.academy/how-to/ai-chatbots/ – or enroll in a free video course at https://learn.cloudsolutions.academy/courses/ai-chatbots-and-limitations/

The intent of this blog is to bring awareness to ChatGPT and its current limitations. Any technology has a set of limitations, and understanding them will help you design and develop solutions with those limitations in mind.

ChatGPT definitely advances Conversational AI, and a lot of time and effort must have gone into building it. Kudos to the team behind it. It will be interesting to see how future versions of ChatGPT address the above limitations.

In my view, ChatGPT and the AI chatbots that follow will be like any other tool that assists you with the required information, while you use your own thinking and intelligence to get work done.

So, sit back and relax; the current version of ChatGPT will not replace anything that requires thinking and expertise!

On a lighter note, this blog was not written by ChatGPT 🙂


Responsible and Ethical AI – Building Explainable Models

Ethical AI, in simple words, is about ensuring your AI models are fair, ethical, and unbiased.

So how does bias get into a model? Let’s assume you are building an AI model that provides salary suggestions for new hires. As part of building the model, you have taken gender as one of the features and are using it to suggest the salary. The model is effectively discriminating salary based on gender. In the past, this bias has crept in through human judgment and various social and economic factors (https://en.wikipedia.org/wiki/Gender_pay_gap), but if you include this bias as part of the new model, it is a recipe for disaster. The whole idea is to build a model that is not biased and suggests salary based on a person’s experience and merits.
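As a minimal sketch of this, and assuming a hypothetical hires.csv with columns such as experience_years, skill_score, gender, and salary, the sensitive attribute can be dropped before the model ever sees it:

```python
# A minimal sketch: drop the sensitive attribute before training so the model
# can only learn from merit-related features. "hires.csv" and its columns
# (experience_years, skill_score, gender, salary) are hypothetical.
import pandas as pd
from sklearn.linear_model import LinearRegression

df = pd.read_csv("hires.csv")
X = df.drop(columns=["salary", "gender"])   # exclude the target and the sensitive feature
y = df["salary"]

model = LinearRegression().fit(X, y)        # salary suggested from experience and merit only
```

Note that dropping the column alone is not sufficient, since other features can act as proxies for gender; validating the data sources and testing the model, as discussed later in this post, are still needed.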

Take another example of an application that provides restaurant recommendations to a user and allows the user to book a table. While recommending new restaurants, the AI application is designed to look at the amount spent in previous transactions and the rating of the restaurant (along with other features), and the system starts recommending costlier restaurants. Even though there might be good, less costly restaurants in the vicinity, they may not show up among the top recommendations. Also, the more the user spends, the more revenue for the restaurant application. In short, you are steering a class of users towards spending more on high-end restaurants without them knowing about it. Does this classify as bias or a smart revenue-generating scheme?

Ethical AI is a great topic for research and debate, and you will see a lot of development (as well as the usual marketing buzzwords) and governance in this area.

So how do you ensure your model is ethical, and how do you validate it? I am sharing my perspective below –

– Designing the model without bias – Ensure you don’t include features that can make your model biased. For instance, don’t include gender while predicting salary packages (as in the sketch above). Take time to validate the data sources and features being used to build the model.

– Explain the model output – Designing applications with explainability in mind should be a key design principle. If the user receives an output from an AI algorithm, an explanation of why that output was presented and how relevant it is should be built into the algorithm. This empowers users to understand why particular information is being presented and to turn on/off any preferences associated with the AI algorithm for future recommendations/suggestions.

– Validate the model – Validate the model with enough test cases. You will also see a lot of offerings (Ethical AI services) crop up around this area in the future. Again, the key is that these offerings/services need to be vertically focused (understanding the domain) rather than pure-play horizontal AI services (else it would end up like the chatbot hype – https://navveenbalani.dev/index.php/articles/ai-chatbots-reality-vs-hype/).

– Accountability – Ultimately, humans need to look at the output from the AI system and take corrective action for critical tasks. I don’t see machines taking over human intelligence for critical tasks in the future. For instance, a cancer treatment option suggested by an AI system needs to be carefully investigated by the doctor, but a fashion website recommending the wrong products to a user is not critical and can be corrected later through feedback/learning.

Going back to the restaurant application, if we design it with the above guidelines in mind and make the output explainable to the user, we can have at minimum four levels of recommendations (shown as tiles in the application), along with evidence of why each recommendation is being provided (a minimal sketch follows the list) –

  1. Recommending restaurants based on earlier restaurant spend, ratings, history, and preferences of the user
  2. Recommending similar restaurants that are highly rated and less costly, based on ratings, history, and preferences of the user
  3. Recommending new restaurants based on the user's history and preferences
  4. Recommendations generated by the system without applying any user preferences
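A minimal sketch of these four tiles is given below. The restaurant and user fields, and the scoring rules, are placeholders for the real recommendation logic; the point is that each tile carries its own evidence string.

```python
# Placeholder logic illustrating four recommendation tiles, each with evidence
# the user can see. Restaurant/user dictionaries are hypothetical shapes.
def build_recommendation_tiles(user, restaurants):
    return [
        {"title": "Based on your spend, ratings and history",
         "evidence": "Uses your previous bookings, spend and preferences",
         "items": [r for r in restaurants
                   if r["rating"] >= 4 and r["price"] >= user["avg_spend"]]},
        {"title": "Highly rated and less costly",
         "evidence": "Similar to places you liked, but cheaper",
         "items": [r for r in restaurants
                   if r["rating"] >= 4 and r["price"] < user["avg_spend"]]},
        {"title": "New restaurants for you",
         "evidence": "Matches your preferences; not visited before",
         "items": [r for r in restaurants
                   if r["cuisine"] in user["preferred_cuisines"]
                   and r["id"] not in user["visited"]]},
        {"title": "Popular right now",
         "evidence": "Generated without applying any of your preferences",
         "items": sorted(restaurants, key=lambda r: r["rating"], reverse=True)[:5]},
    ]
```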

The revised application now provides various recommendations with enough evidence to back them up, and the choice is ultimately left to the user to pick a restaurant and book a table.

The above was a very simple application, but imagine when AI is deployed across industries and in government agencies; developing and monitoring AI systems for ethical principles then becomes extremely critical. Both the creators of the model and the validators (agencies, third-party systems, etc.) will be critical to ensuring AI models are fair, ethical, and unbiased.

As we are the creators and validators of AI systems, the onus lies on us (humans) to ensure technology is used for good.


It’s time to Reset the AI application?

Do you think AI is changing your ability to think? From applications recommending what movies to watch, what songs to listen to, what to buy, what to eat, what ads you see, and the list goes on… all are driven by applications learning from you or delivering information through collective intelligence (i.e., people like you, location-based signals, etc.).

But are you sure the right recommendation is being provided to you, or are you consuming the information as-is and adapting to it? Have you ever wondered whether you would have reached the same conclusion by applying your own research and knowledge?

In addition, with information readily available, less time and mental effort is spent on problem solving and more on searching for solutions online.

As we build ever smarter applications that keep learning everything about you, do you think this will change our thinking patterns even further?

Apart from AI systems trying to learn, there are other ethical issues around trust and bias, and questions about how to design and validate systems whose recommendations humans can consume to make unbiased decisions. I have covered this in an earlier article – https://navveenbalani.dev/index.php/articles/responsible-and-ethical-ai-building-explainable-models/

As we are creators and validators of the AI system, the onus lies on us (humans) to ensure any technology is used for good.

As standards and compliance are still evolving in the AI world, we should start designing systems that let users decide how to use the application and when to reset it.

I am suggesting a few approaches below to drive discussion in this area, which needs contributions from everyone to help deliver smart and transparent AI applications in the future.

The Uber Persona Model

All applications incrementally build some kind of semantic user profile to understand more about the user and provide recommendations. Making this transparent to the user should be the first step.

Your application can have various semantic user profiles – one about you, one about your community (people similar to you, location-based, etc.) – along with how each has been derived over time. Finally, your application should have a Reset Profile that lets you reset your profile, or a “Private AI” profile that lets you use the application without it knowing anything about you, leaving you to discover the required information yourself. Leaving the choice of which profile to use to the end user should lead to better control and transparency and help users build trust in the system.
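A minimal sketch of this profile model might look like the following; the class and field names are illustrative assumptions, not a real API.

```python
# Illustrative sketch of the "Uber Persona Model": a personal profile, a
# community profile, a "private" profile that never accumulates signals, and
# a reset operation.
from dataclasses import dataclass, field

@dataclass
class SemanticProfile:
    name: str                                       # "personal", "community" or "private"
    signals: dict = field(default_factory=dict)     # learned preferences; stays empty for "private"

class PersonaManager:
    def __init__(self):
        self.profiles = {n: SemanticProfile(n) for n in ("personal", "community", "private")}
        self.active = "personal"

    def use(self, name: str):
        """Let the user choose which profile drives recommendations."""
        self.active = name

    def reset(self, name: str = "personal"):
        """The 'Reset Profile' action: forget everything learned about the user."""
        self.profiles[name] = SemanticProfile(name)
```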

Explainability and Auditability

Designing applications with explainability in mind should be a key design principle. If the user receives an output from an AI algorithm, information as to why that output was presented and how relevant it is should be built into the algorithm. This would empower users to understand why particular information is being presented and to turn on/off any preferences associated with an AI algorithm for future recommendations/suggestions.

For instance, take the example of server auditing, where you have tools that log every request and response, track changes in the environment, assess access controls and risk, and provide end-to-end transparency.

The same level of auditing is required when AI delivers an output – what was the input, which version of the model was used, which features were evaluated, what data was used for evaluation, what was the confidence score, what was the threshold, what output was delivered, and what was the feedback.
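As an illustration, that audit trail could be captured in a record like the following; the field names are assumptions for the sketch.

```python
# A sketch of an audit record for a single AI output, covering the fields
# listed above. Field names are illustrative.
from dataclasses import dataclass
from datetime import datetime
from typing import List

@dataclass
class AIAuditRecord:
    timestamp: datetime
    request_input: str             # what the user asked
    model_version: str             # which model version produced the output
    features_evaluated: List[str]  # features considered for this request
    evaluation_data_ref: str       # pointer to the data/version used for evaluation
    confidence_score: float
    threshold: float
    output: str                    # what was delivered to the user
    feedback: str = ""             # user feedback, recorded later
```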

Gamifying the Knowledge Discovery

As information is readily available, how do you make it consumable in a way that nudges users to use their own mental ability to find solutions, rather than giving all the information in one go? This would be particularly relevant to how education in general (especially in schools/universities) is delivered in the future.

How about a Google-like smart search engine that delivers information in a way that lets you test your skills? As mentioned earlier in the Uber Persona Model section, the choice to switch this behaviour on or off is up to the user.

I hope this article gave you enough insight into this important area.

To conclude, I would say the only difference between AI and us in the future will be our ability to think wisely and build the future we want.


The chatbot hype failed to live up

AI chatbots give a perception of being intelligent, but intelligence is a long way away, says Navveen Balani.

Read my article on why the first generation of chatbots did not live up to the hype. The article was featured in the August edition of the https://www.industrialautomationindia.in/ magazine.

Here is the link to the content from the magazine – http://navveenbalani.dev/wp-content/uploads/2020/08/navveen-magazine.pdf
