
Ethical Prompt Engineering: A Pathway to Responsible AI Usage

Artificial intelligence (AI) is transforming our world at an unprecedented pace. As AI becomes more ingrained in our daily lives, concerns about bias and fairness in AI models continue to grow. In response to these issues, the field of ethical prompt engineering has emerged as a vital tool in ensuring AI applications are transparent, fair, and trustworthy. This blog post will explore ethical prompt engineering, discussing its role in mitigating AI bias and providing real-world examples to showcase its importance.

Ethical Prompt Engineering: The Basics

Ethical prompt engineering is the process of crafting input queries or prompts for AI models in a way that minimizes biases and promotes fairness. This method acknowledges that AI models may inherently have biases due to the data they were trained on, but it aims to mitigate those biases by carefully designing the questions asked of the AI. Essentially, ethical prompt engineering helps to ensure that AI output aligns with human values and moral principles.

The Importance of Ethical Prompt Engineering

AI models have the potential to perpetuate harmful biases if their responses are not carefully examined and managed. Real-world examples of AI bias include the unfair treatment of individuals in facial recognition systems, biased hiring algorithms, and skewed newsfeed content. Ethical prompt engineering can be an effective way to address these issues and ensure that AI systems are developed and deployed responsibly.

Real-World Examples of AI Bias

  1. Insurance quotes: AI models used in the insurance industry may inadvertently provide discriminatory quotes based on factors such as age, gender, or race. These biases can result in unfair pricing and reduced access to insurance coverage for certain groups.
  2. Job recruitment: AI-powered recruitment tools may generate biased candidate shortlists by disproportionately favoring individuals based on factors such as gender, ethnicity, or educational background, rather than purely considering their skills, experience, and qualifications.
  3. Newsfeed content: AI algorithms used to curate personalized newsfeeds can contribute to the creation of echo chambers by prioritizing content that reinforces users’ existing beliefs and biases, thereby limiting exposure to diverse perspectives.
  4. Customer service: AI chatbots and virtual assistants may inadvertently treat customers differently based on their names, speech patterns, or other factors, leading to unequal service experiences for certain groups.
  5. Loan approvals: AI models used in credit scoring and loan decision-making may discriminate against minority borrowers due to historical biases in the data used to train these models, resulting in unfair lending practices.

Various Approaches to Ethical AI Development

Several approaches can be employed to ensure fairness and minimize bias in AI models:

  1. Data collection: Ensuring diverse and representative data sets are used during the training process can help reduce biases. By collecting data from various sources and demographics, AI models can learn to be more inclusive and fair.
  2. Training with different perspectives: Encouraging interdisciplinary collaboration during AI development can provide valuable insights to identify and address potential biases. By including experts from different fields, AI models can benefit from a broader understanding of potential issues and ethical concerns.
  3. Regular audits and evaluations: Continuously assessing AI models for biases and ethical concerns can help identify issues early on. By conducting regular evaluations and adapting the models accordingly, developers can work to reduce biases in AI applications; a minimal audit sketch follows this list.
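
Putting the third approach into practice can start small. Below is a minimal sketch of a recurring fairness audit, assuming a hypothetical scikit-learn-style classifier model and a pandas DataFrame applicants containing a sensitive column such as "gender"; a real audit would add domain review and a dedicated fairness toolkit.

    # A hedged sketch of a periodic bias audit: compare approval rates
    # across demographic groups and flag large gaps for human review.
    import pandas as pd

    def selection_rates(model, applicants: pd.DataFrame, sensitive: str) -> pd.Series:
        # Score applicants without the sensitive column, then group by it.
        features = applicants.drop(columns=[sensitive])
        scored = applicants.assign(approved=model.predict(features))
        return scored.groupby(sensitive)["approved"].mean()

    def audit(model, applicants: pd.DataFrame, sensitive: str = "gender", max_gap: float = 0.2) -> float:
        rates = selection_rates(model, applicants, sensitive)
        gap = rates.max() - rates.min()
        if gap > max_gap:  # threshold inspired by the "80% rule" heuristic
            print(f"Possible bias: approval rates differ by {gap:.0%}\n{rates}")
        return gap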

Ethical Prompt Engineering in Practice

Even when an AI model carries inherent biases, prompt engineering can still be used to minimize their impact. By carefully crafting prompts that guide the model toward responses that align with ethical guidelines, developers can make AI systems more responsible and less biased. Here are some examples of ethical prompts; a code sketch applying the first one follows the list:

  1. AI recruitment tool: Instead of asking the AI model to filter candidates based on the applicants’ names, an ethical prompt could be, “Please rank the candidates based on their relevant skills, experience, and qualifications for the job.”
  2. AI insurance quoting system: Rather than allowing the AI model to consider factors such as age, gender, or race, an ethical prompt could be, “Please provide an insurance quote based on the applicant’s driving history, location, and vehicle type.”
  3. AI newsfeed curation: To avoid creating echo chambers, an ethical prompt could be, “Please recommend a balanced selection of articles that provide diverse perspectives on the topic.”
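
To make the recruitment example concrete, here is a minimal sketch of assembling such a prompt programmatically. The call_model() helper is a hypothetical stand-in for whichever LLM API you use, and the protected-attribute list is illustrative, not exhaustive.

    # A hedged sketch: strip protected attributes before they reach the model,
    # then instruct the model to rank on merit alone.
    PROTECTED = {"name", "age", "gender", "ethnicity", "photo"}

    def build_ranking_prompt(candidates: list[dict]) -> str:
        safe = [{k: v for k, v in c.items() if k not in PROTECTED} for c in candidates]
        return (
            "Please rank the candidates below based only on their relevant "
            "skills, experience, and qualifications for the job. Do not infer "
            "or use demographic attributes.\n"
            f"Candidates: {safe}"
        )

    prompt = build_ranking_prompt([
        {"name": "A. Doe", "skills": ["Python", "SQL"], "experience_years": 6},
        {"name": "B. Roe", "skills": ["Java"], "experience_years": 4},
    ])
    # response = call_model(prompt)  # hypothetical LLM call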

By using these and similar ethical prompts, developers can create AI applications that are more aligned with societal needs and expectations.

Introducing “Prompt Engineering: Unlocking Generative AI: Ethical Creative AI for All”

If you are interested in learning more about designing and implementing ethical prompts, consider exploring my book, “Prompt Engineering: Unlocking Generative AI: Ethical Creative AI for All.” This comprehensive resource delves into the principles and practices of ethical prompt engineering, providing readers with practical guidance on how to develop and deploy AI systems that are both innovative and responsible.

In conclusion, ethical prompt engineering is a critical component of responsible AI development. By carefully crafting the questions we ask AI systems, we can create more fair, transparent, and ethical AI applications. As the field of ethical prompt engineering continues to evolve, it’s essential for AI practitioners, researchers, and users to prioritize ethical considerations and work together to harness the power of AI responsibly.

Direct Link to the Book – https://amzn.to/3UWuYu5


Prompt Engineering: Unlocking Generative AI: Ethical Creative AI for All

In recent years, artificial intelligence (AI) and machine learning have transformed countless industries, revolutionizing how we work, learn, and communicate. One of the most significant advances in this field has been the development of large-scale language models (LLMs), such as OpenAI GPT-4 and Google Bard, capable of understanding and generating human-like text. The potential applications of these models are vast, from writing assistance and content generation to information retrieval and natural language interfaces. Generative AI, a subset of AI that focuses on creating new content or data, has emerged as a key player in this landscape.

As the capabilities of language models have grown, so too has the importance of understanding how to effectively communicate with them. Enter the field of prompt engineering—the art and science of crafting the perfect input to achieve the desired output from a language model. This book, ‘Prompt Engineering: Unlocking Generative AI,’ is designed to provide a comprehensive yet accessible guide to the fascinating and rapidly evolving disciplines of generative AI and prompt engineering.

Whether you’re an AI enthusiast, a software developer, a content creator, or simply someone interested in harnessing the power of AI for personal or professional use, this book aims to equip you with the knowledge and tools you need to become a proficient prompt engineer.
Through clear explanations, practical examples, and use cases, you’ll learn the foundations of language models, the principles of effective prompt design, and the techniques and strategies that will enable you to unlock the full potential of these remarkable AI systems.

Along the way, we’ll also delve into the ethical considerations surrounding prompt engineering, examining issues such as bias, fairness, privacy, and security. As AI continues to reshape the world around us, we must use this technology responsibly and thoughtfully, and this book aims to empower you to do just that.

Finally, we’ll explore the future of prompt engineering and the exciting opportunities and challenges that lie ahead. The field is still in its infancy, with much to discover and invent. By the time you finish reading this book, you’ll be well-equipped to contribute to this dynamic and rapidly growing area of AI research and application. Together, let us embark on this journey to unlock the true power of AI language models and transform how we communicate with technology.

We hope that “Prompt Engineering: Unlocking Generative AI” will serve as a valuable resource and a source of inspiration as you harness the power of AI to achieve your goals and shape the future. Happy prompting!

Click here to buy the book


Here is a captivating TOC for the book – 
Chapter 1. Introduction to Prompt Engineering

  • Emergence of Generative AI and AI Creativity
  • What is Prompt Engineering
  • From Programming to Prompting: A Paradigm Shift
  • How is Prompt Engineering different from Search
  • Skills Required for Prompt Engineering
  • Key Concepts and Terminology
  • The Importance of Prompt Engineering
  • Your first hello world creative prompt
  • Summary

Chapter 2. Foundations of Language Models

  • What are Language Models?
  • Types of Language Models
  • Evolution of GPT and Technology Advancements
  • How Language Models like GPT-4 Work
  • Limitations of Language Models
  • Summary

Chapter 3. Art and Science of Prompt Engineering

  • The Process for crafting effective prompts
  • Developing a Clear Objective and Goals
  • Crafting Clear Objectives and Goals in Action
  • Design Principles for Effective Prompts
  • Enhancing Prompt Design: From Poor to Better Prompts in Action
  • Eliciting Creativity and Originality
  • Eliciting Creativity and Originality in Action
  • Prompt Optimization
  • Techniques for Prompt Optimization in Action
  • Testing, Monitoring, and Evaluation
  • Techniques and Strategy for Testing, Monitoring, and Evaluation
  • Crafting End-to-End Prompt Solutions: Goal, Design, Innovate, Optimize, and Testing
  • Summary

Chapter 4. Crafting Prompt Types

  • Understanding Prompt Types
  • Cross-Functional Prompt Types
  • 25+ Ingenious Cross-Functional Starter Prompts for Every Occasion
  • 30+ Industry-Specific Prompt Types
  • Summary

Chapter 5. Advanced Prompt Engineering

  • Chaining Prompts for Multi-Step Tasks
  • Iterative Prompting for Ambiguity Resolution
  • Context Manipulation Strategies
  • Dynamic and Conditional Prompts
  • Adversarial Prompts for Model Robustness
  • Mitigating Prompt Bias and Improving Fairness
  • Limitations And Pitfalls
  • Addressing Limitations and Potential Pitfalls
  • Summary

Chapter 6. Ethical Considerations in Prompt Engineering

  • Ethical Concerns in AI Creativity and Prompt Engineering
  • Ethical Principles and Best Practices for Prompt Engineering
  • Ethical Prompts in Action
  • Case Studies: Ethical Prompt Engineering in Practice
  • Industry Initiatives and Regulatory Frameworks
  • Future Directions and Challenges
  • Summary

Chapter 7. Use Cases for Real-World Prompt Engineering

  • Launch of Global Credit Card
  • The Perfect Interview
  • Future of Mobility
  • Social Media Optimization
  • Future of Work
  • Designing a Future-Ready Autonomous Vehicle
  • The Next BlockBuster Movie
  • New Clothing Line for Corporate Work from Home
  • Enhancing Employee Engagement in Workplaces
  • Reimagining Risk Management
  • Metaverse-Ready Shopping Experience
  • Smart Cities and Sustainable Infrastructure
  • Manufacturing Excellence: Supply Chain Optimization
  • Software Architecture Decisions and Code Generation
  • Iterative Personalized Family Travel Itinerary Creation
  • Summary

Chapter 8. The Future of Prompt Engineering

  • A Multi-Modal, Interconnected, and Ethical AI Landscape
  • Summary

Move towards genApps or Generative AI Apps

Web Apps, Mobile Apps, and now Gen Apps. Gen Apps or Generative AI Applications are applications that can generate new content based on user input. The application can converse and generate text, images, code, videos, audio, and more from simple natural language prompts. The possibilities are endless!

Building Gen Apps requires a new set of integrated tools. The recent announcement from Google Cloud describes a Generative AI App Builder that allows developers to quickly ship new experiences, including bots, chat interfaces, custom search engines, digital assistants, and more. More details at – https://cloud.google.com/blog/products/ai-machine-learning/generative-ai-for-businesses-and-governments

Looking forward to trying out Generative AI App Builder.  


Real and Generative AI Book – Second Edition Available

Happy to share that the second edition of my book, “Real and Generative AI,” is now available digitally through Amazon.

The second edition covers the latest buzz around Generative AI, ChatGPT, the current landscape and challenges, and what it would take for enterprises to adopt Generative AI Chatbots like ChatGPT.

The first edition was released 4+ years back. It was good to go back and validate the predictions that were made.

Is this current hype real, or have we just started scratching the surface of intelligence?  Read the book for more details.

Order your copy at – https://amzn.to/3yDq6zP


Responsible AI, Ethical AI, and ChatGPT

Responsible AI, in simple words, is about developing explainable, ethical, and unbiased models. For instance, Google has published its AI principles – https://ai.google/principles/ – which discuss this subject in detail. Similarly, Microsoft has published its AI principles at https://learn.microsoft.com/en-us/azure/cloud-adoption-framework/innovate/best-practices/trusted-ai. These key AI principles should be part of the design and development of large language models, as millions of users will view the output of these AI systems. However, ChatGPT falls short of these AI principles in many instances.

  • Lack of Trust – Responses can sound plausible even when the output is false (Reference – https://venturebeat.com/ai/the-hidden-danger-of-chatgpt-and-generative-ai-the-ai-beat/). You can’t rely on the output and ultimately need to verify it yourself.
  • Lack of Explainability – There is no visibility into how responses are derived. If a response draws on multiple sources, those sources should be listed and attributed. The underlying content may carry inherent bias, and it is unclear how that bias is removed before training or filtered from responses, or whether any source was given priority when generating a response. Currently, ChatGPT provides no explainability for its answers (a prompt-level workaround is sketched after this list).
  • Ethical aspects – One example is code generation. Generated code carries no attribution to the original code, authors, or license details. Open source has many licenses (https://opensource.org/licenses/), some of them restrictive, and it is unknown whether any open-source repositories were given priority during training (or when filtering outputs). Questions about the code’s security, vulnerabilities, and scalability must also be addressed. It remains the accountability and responsibility of developers to ensure that generated code is reviewed, tested, secure, and follows their organization’s guidelines. All of the above should be transparent. For instance, if customers ask for a Certificate of Originality for their software application (or if a future law requires one), AI-generated code could pose a challenge unless these points are addressed.
  • Fairness in responses – An excellent set of principles and an AI Fairness Checklist are outlined in the Microsoft paper – https://query.prod.cms.rt.microsoft.com/cms/api/am/binary/RE4t6dA. Given that ChatGPT is trained on internet content (a massive 570 GB of data sourced from Wikipedia, research articles, websites, books, etc.), it would be interesting to see how many items from the AI Fairness Checklist are followed. The content itself may be biased and limited, and the human feedback used for data labeling may not represent the wider world (Reference – https://time.com/6247678/openai-chatgpt-kenya-workers/). These are only the reported instances; many may remain undiscovered. Large-scale language models should be designed fairly and inclusively before being released. We should not simply say that the underlying content was biased and hence the trained model inherited the bias; we now have an opportunity to remove those biases from the content itself as we train the models.
  • Confidentiality and privacy issues – As ChatGPT learns from interactions, there are no clear guidelines on what information is used for training. If you interact with ChatGPT and end up sharing confidential data or customer code for debugging, will that be used for training? Can we claim ChatGPT is GDPR compliant?
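
One practical, if partial, mitigation for the explainability gap is to demand sources in the prompt itself. Below is a minimal sketch, assuming a hypothetical call_model() helper wrapping whatever LLM API you use; note that models can also fabricate citations, so a human must still verify them.

    # A hedged sketch: ask the model to attribute every claim up front.
    # call_model() is a hypothetical stand-in for an LLM API call.
    def explainable_prompt(question: str) -> str:
        return (
            f"{question}\n"
            "For every factual claim in your answer, name the source you "
            "relied on. If you cannot name a source, mark the claim as "
            "'unverified' instead of stating it as fact."
        )

    # answer = call_model(explainable_prompt("Summarize the GDPR's scope."))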

I have listed only some highlights above to raise awareness around this important topic.

We are at a junction where we will see many AI advancements like ChatGPT, used by millions of users. As these large-scale AI models are released, we must embed Responsible AI principles into them to create a safer and more trusted environment.


ChatGPT and Generative AI Similarity Syndrome

What is Similarity Syndrome?

Similarity Syndrome is a perception that large language models like ChatGPT can create, in which any unique work ultimately gets treated as common knowledge or as similar to other work.

For instance,

  • You write a unique quote for your book, and ChatGPT may end up calling it common knowledge, similar to many other quotes, instead of attributing it to you.
  • You create an algorithm, and Codex combines your code with other algorithms to create something similar. It will reuse your code and sell it back to you.
  • You create digital art and unique images, and DALL·E 2 combines your image with others to create similar images. It will sell these as an image library, and you buy it without realizing that your image contributed to it.

In general, any piece of content can be used to create something similar, with no explainability and no attribution to the original authors/content creators of how the dynamic response was created. There should be strict laws for copyright and accountability in AI.

I ran an experiment and asked ChatGPT about the quote I wrote in my book. Given below is the interaction.

Many websites, found through Google Search, attribute the quote to me (like https://www.goodreads.com/author/quotes/49633.Naveen_Balani).

This is not just about quotes but about virtually any generated content, be it digital art, music, software code, or marketing content. AI would use your data, generate something similar, not attribute it to you, call it common knowledge, and even sell it back to you. We will end up living in an AI world where everything is similar : )

To conclude, AI models should be transparent, explainable, auditable, ethical, and, most importantly, credit the original work created by the authors.


ChatGPT Even Predicts the Future

ChatGPT seems good at converting facts to fiction. I asked ChatGPT about myself, and around 70% of the information was cooked up (see image below). On a lighter note, maybe it’s predicting a future I am unaware of : ). But what about the past details? I would need a time machine to go back and change them. Hopefully, I will write a post on the time machine someday.

With Google Search, all the correct details and my website show up.

With ChatGPT, I expected that at least this information would be correct, as it is readily available. The sources could be LinkedIn, my website, or other credible sources; there is no logic and no complex deep-learning network to apply. The biggest problem with the response below is that there is no explainability and no detail on the sources used to construct it. How can one verify the correctness of the response? The responses are created dynamically even when that is not required. Unless you design with explainability in mind, the issues around trust and transparency will not be resolved. There are other issues around bias, ethics, and confidentiality, which I will cover in future posts.

Unless you already know the right answer, it would be difficult to tell from the above response, as the answers are grammatically correct and may sound right. And this is a very simple scenario. Moving from general intelligence to specific verticals like healthcare (for instance, diagnosis) increases the complexity of large language models and poses significant challenges. I have discussed this in one of my previous blogs on the lack of domain intelligence in ChatGPT (and other conversational engines).

I can’t predict the future, but I believe Google Search might be better positioned to take Generative AI and integrate it with its search engine. Hopefully, they follow their seven AI principles – https://ai.google/principles/ – before release, to make AI applications transparent, explainable, and ethical. “Slow but steady wins the race” might come true for Google.

With the AI-powered Bing search engine, some early feedback and interesting findings about long chat sessions going wrong are documented on their website – https://blogs.bing.com/search/february-2023/The-new-Bing-Edge-%E2%80%93-Learning-from-our-first-week.

In my next post, I will talk about an interesting subject – “Similarity Syndrome in ChatGPT.”


ChatGPT for Enterprise Adoption

To make ChatGPT relevant for enterprise adoption, we need:

  • Domain Adaptability
  • Domain Intelligence
  • Explainability
  • Transparency
  • Non-biased output
  • Privacy
  • Scalability – compute power for training and inference
  • Lower Environmental Footprint

ChatGPT has a long way to go for enterprise adoption. What do you think? Where do ChatGPT and other large language models stand on enterprise adoption?


From Watson to ChatGPT: AI Chatbots and Limitations

The release of ChatGPT and the responses it provided brought Conversational AI back to the forefront and made it available to everyone through a simple web interface. We saw many creative ways to use ChatGPT, speculation on how it might impact the future, and questions about whether it will replace the Google Search engine and jobs.

Well, let’s address this question with the analysis below –

From the early Watson systems to ChatGPT, a fundamental issue still remains with Conversational AI:

Lack of Domain Intelligence.

While ChatGPT definitely advances the field of Conversational AI, I’d like to call out the following from my book, Real AI: Chatbots (published in 2019):

“AI can learn but can’t think.”

Thinking will always be left to humans: deciding how to use the output of an AI system. AI systems and their knowledge will always be boxed in by what they have learned; they can never generalize (like humans) where domain expertise and intelligence are required.

What is an example of Domain Intelligence?

Take a simple example where you ask the Conversational AI agent to “Suggest outfits for Shorts and Saree”.

Fundamentally, any skilled person would treat these as two different options – matching outfits with shorts and matching outfits with a saree – or would ask clarifying questions, or would point out that the options are disjoint and can’t be combined.

But with ChatGPT (or any general-purpose Conversational AI), the response was as shown below: without understanding the domain and context, it simply tries to fill in some response. This is a very simple example, but the complexity grows exponentially where deep expertise and correlation are required – like a doctor recommending treatment options. This is precisely why we saw many failures when AI agents were applied to healthcare problems: they tried to train general-purpose AI rather than building domain-expert AI systems.
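
One partial mitigation, short of building a true domain-expert system, is to instruct the agent to ask for clarification instead of guessing. The sketch below uses a hypothetical system prompt and call_model() helper; it is a workaround, not a fix for the underlying lack of domain intelligence.

    # A hedged sketch: force clarification when a request mixes disjoint options.
    SYSTEM = (
        "You are a fashion assistant. If a request combines items that cannot "
        "be worn together (for example, two disjoint garments), do not invent "
        "an answer: either treat them as separate options or ask a clarifying "
        "question."
    )

    # reply = call_model(system=SYSTEM, user="Suggest outfits for Shorts and Saree")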

The other issues with this generative dialog AI system are –

Explainability – Making the AI output explainable: how did the system arrive at its answer? I have described this in my earlier blog – Responsible and Ethical AI.

Trust and Recommendation Bias – Right recommendations and adaptability. I have explained this in my earlier blog – https://navveenbalani.dev/index.php/views-opinions/its-time-to-reset-the-ai-application/

For more details, I have explained this concept in my short ebook – Real AI: Chatbots (2019) – https://amzn.to/3CmoexC

You can find the book online on my website – https://cloudsolutions.academy/how-to/ai-chatbots/ or enroll for a free video course at https://learn.cloudsolutions.academy/courses/ai-chatbots-and-limitations/

The intent of this blog is to bring awareness of ChatGPT and its current limitations. Every technology has a set of limitations, and understanding them will help you design and develop solutions with those limitations in mind.

ChatGPT definitely advances Conversational AI, and a lot of time and effort must have gone into building it. Kudos to the team behind it. It will be interesting to see how future versions of ChatGPT address the above limitations.

In my view, ChatGPT and the AI chatbots that follow will be like any other tool: they will assist you with the required information, and you will use your own thinking and intelligence to get the work done.

So, sit back and relax; the current version of ChatGPT will not replace anything that requires thinking and expertise!

On a lighter note, this blog is not written by ChatGPT 🙂


Responsible and Ethical AI – Building Explainable models

Ethical AI, in simple words, is about ensuring your AI models are fair, ethical, and unbiased.

So how does bias get into a model? Let’s assume you are building an AI model that provides salary suggestions for new hires. While building the model, you include gender as one of the features and use it to suggest salaries; the model then discriminates on salary based on gender. In the past, this bias crept in through human judgment and various social and economic factors (https://en.wikipedia.org/wiki/Gender_pay_gap), but baking it into a new model is a recipe for disaster. The whole idea is to build a model that is not biased and that suggests salaries based on people’s experience and merit.

Take another example: an application that provides restaurant recommendations and lets the user book a table. While recommending new restaurants, the AI application is designed to look at the amount spent in previous transactions and the rating of the restaurant (along with other features), so the system starts recommending costlier restaurants. Even though there might be good, less costly restaurants in the vicinity, they may not show up among the top recommendations. And the more the user spends, the more revenue for the restaurant application. In short, you are steering a class of users towards spending more on high-end restaurants without their knowledge. Does this classify as bias, or as a smart revenue-generating scheme?

Ethical AI is a great topic for research and debate, and you will see a lot of development (as well as the usual marketing buzzwords) and governance in this area.

So how do you ensure your model is ethical, and how do you validate it? I am sharing my perspective below –

– Design the model without bias – Ensure you don’t include features that can make your model biased. For instance, don’t include gender while predicting salary packages. Take time to validate the data sources and features used to build the model (a minimal sketch follows these guidelines).

– Explain the model output – Designing applications with explainability in mind should be a key design principle. If a user receives an output from an AI algorithm, the algorithm should also convey why that output was presented and how relevant it is. This empowers users to understand why particular information is being shown and to turn on or off any preferences associated with the algorithm for future recommendations/suggestions.

– Validate the model – Validate the model with enough test cases. You will also see a lot of offerings (Ethical AI services) crop up in this area. Again, the key is that such offerings/services need to be vertically focused (understanding the domain) rather than pure-play horizontal AI services (else it will end up like the chatbots hype – https://navveenbalani.dev/index.php/articles/ai-chatbots-reality-vs-hype/).

– Accountability – Ultimately, humans need to look at the output of the AI system and take corrective action for critical tasks. I don’t see machines taking over human intelligence for critical tasks in the near future. For instance, a cancer treatment option suggested by an AI system needs to be carefully investigated by the doctor, whereas a fashion website recommending the wrong products is not critical and can be corrected later through feedback and learning.
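
As an illustration of the first guideline, here is a minimal sketch of excluding sensitive attributes before training a salary model. The column names and hires.csv file are hypothetical, and the remaining columns are assumed numeric; note that proxy features (such as career gaps) can still leak bias and need separate validation.

    # A hedged sketch: train on merit-related features only.
    import pandas as pd
    from sklearn.linear_model import LinearRegression

    SENSITIVE = ["gender", "age", "ethnicity"]

    data = pd.read_csv("hires.csv")                # hypothetical hiring data
    X = data.drop(columns=SENSITIVE + ["salary"])  # e.g., experience, skills
    y = data["salary"]

    model = LinearRegression().fit(X, y)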

Going back to the restaurant application: if we design it with the above guidelines in mind and make the output explainable to the user, we can offer at minimum four levels of recommendations (shown as tiles in the application), each with evidence for why it is being provided –

  1. Recommendations based on the user’s earlier restaurant spends, ratings, history, and preferences
  2. Similar restaurants that are highly rated and less costly, based on the user’s ratings, history, and preferences
  3. New restaurants based on the user’s history and preferences
  4. Recommendations generated by the system without applying any user preferences.

The revised application now provides a range of recommendations with enough evidence to back them up, and ultimately the choice is left to the user to pick a restaurant and book a table.
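
A minimal sketch of what one such explainable recommendation tile might look like as a data structure is shown below; the field names are hypothetical, the point being that every recommendation carries the evidence behind it.

    # A hedged sketch: each recommendation ships with its own evidence.
    from dataclasses import dataclass

    @dataclass
    class Recommendation:
        restaurant: str
        tier: str      # which of the four recommendation levels produced it
        evidence: str  # shown to the user alongside the tile

    recs = [
        Recommendation("Cafe Verde", "based on past spends and ratings",
                       "You rated two similar cafes 5 stars last month."),
        Recommendation("Spice Route", "highly rated and less costly",
                       "Rated 4.6 and about 30% cheaper than your usual picks."),
    ]
    for r in recs:
        print(f"{r.restaurant} [{r.tier}]: {r.evidence}")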

The above is a very simple application, but imagine AI deployed across industries and government agencies: developing and monitoring AI systems against ethical principles then becomes extremely critical. Both the creators of a model and its validators (agencies, third-party systems, etc.) will be essential to ensuring AI models are fair, ethical, and unbiased.

As the creators and validators of AI systems, the onus lies on us (humans) to ensure the technology is used for good.
