Tracking the AI Disruption: Impact and Benefits for Businesses

The main objective of this thread is to track the progress of the disruption caused by AI innovation and its impact on the businesses that will be disrupted or benefited by it.

Since 1997, I have been actively involved in the IT services sector and have personally witnessed two major innovations that caused disruptions. The first was the immense power unleashed by the introduction of the Internet, which globalized the world and led to the widespread use of distributed software in businesses, optimizing and accelerating their reach. Those who failed to adopt these changes were overtaken by competitors who embraced them early and effectively. An iconic example is Netflix, which disrupted and ultimately displaced Blockbuster's traditional video-rental business.

The second major innovation was the emergence of public cloud computing services like AWS, Azure, and Google Cloud. Cloud computing revolutionized the IT landscape by reducing server provisioning time from months to minutes and offering a pay-per-use model, eliminating the need for upfront capital investments. This gave rise to a startup boom, empowering college students to become entrepreneurs.

Now, I am witnessing the third wave of disruption, caused by Artificial Intelligence (AI). AI, machine learning, and quantitative techniques have been around for nearly two decades, but recent advancements in large language models like ChatGPT, Claude, Llama, and Bard are accelerating these disruptions even further.

In the AI era, companies with access to vast amounts of data will have a significant advantage. Giants like Google, Microsoft (with LinkedIn and other acquired data), Meta and Amazon (with extensive retail data) are already positioned as major beneficiaries. Companies with access to substantial compute power can scrape data from sources like Twitter and Stack Overflow. In India, organizations like NSE (trading data) and UIDAI (demographics data) are among those with reasonable data access.

Software product development companies operating in this space, and large corporations, will benefit from reduced reliance on human resources through AI technologies. On the other hand, certain professions and businesses will face heavy disruption, including content writers, designers, low-level software developers (websites, mobile applications), data analysts, sample data creators, and call centers. IT services companies that fail to innovate risk being commoditized and marginalized.

The impact on the job market, particularly in the mentioned areas, could be profound. It’s essential to understand which sectors, companies, and professions will benefit from these innovations and which ones will be negatively disrupted. Tracking the impact and benefits of these changes will be crucial.

Bloomberg has been using AI and machine learning to optimize its investment and research businesses, including the development of BloombergGPT, a 50-billion-parameter language model designed for finance.

References:

Humans in the Loop: AI & Machine Learning in the Bloomberg Terminal | Bloomberg LP
https://www.bloomberg.com/company/stories/humans-in-the-loop-ai-machine-learning-in-the-bloomberg-terminal/

Introducing BloombergGPT, Bloomberg’s 50-billion parameter large language model, purpose-built from scratch for finance | Press | Bloomberg LP

I will conclude by disclosing that this post was edited by ChatGPT, and I am surprised by the enhancements it made.

14 Likes

AI Disruption Unleashed: The Evolution of Quant Analysis and AI in Investment Business

Flashing back about two decades, Jim Simons, the renowned investor and mathematician, pioneered the use of mathematical models, machine learning, and historical data to predict future stock prices. His groundbreaking idea of merging historical data with live market information revolutionized quant analysis, enabling highly successful forecasts. Implementing this predictive system in his fund, Renaissance Technologies, Simons automated buy and sell orders based on these predictions, generating exceptional returns for investors and earning him the title of the father of quant analysis.

Reference: The Man Who Solved the Market by Gregory Zuckerman (a book on Jim Simons).

Fast forward to the present: AI has taken a quantum leap in the past two decades, with significant advancements in data, compute power, and ML algorithms. Embracing this transformative progress is Bloomberg, an industry leader with access to all three components. Bloomberg’s substantial investments in AI have led to a series of cutting-edge initiatives:

  1. BloombergGPT: A groundbreaking 50-billion parameter language model tailored for finance, enabling tasks such as generating financial reports, crafting news articles, and providing precise customer responses.

  2. Bloomberg AI-Powered Document Search: An advanced AI-driven search engine capable of indexing and swiftly retrieving documents, empowering users with efficient access to information.

  3. Bloomberg AI-Powered News Analytics: A suite of AI-powered tools analyzing news articles to unearth trends, relationships, and valuable insights from vast news data.

  4. Bloomberg AI-Powered Trading: A comprehensive suite of AI tools empowering traders to make astute decisions, including identifying trading opportunities, managing risk, and executing trades with greater efficacy.

With Bloomberg harnessing AI’s potential, the investment landscape is undergoing a profound transformation, revolutionizing investment strategies and reshaping the future of finance.

2 Likes

Generative AI: The Inflection Point of AI Disruption and Innovation

In the world of innovation, breakthroughs become true disruptors when they encounter a pivotal moment, triggering a snowball effect that propels them forward with tremendous momentum. Although AI is not a new concept, the introduction of ChatGPT, a generative AI tool by OpenAI, marked a transformative inflection point. This milestone brought AI implementation to the masses, making it accessible without requiring programming knowledge. In late November 2022, ChatGPT was launched for free consumer use, and its impact was nothing short of astounding. Here, I'll attempt to list a few examples:

  1. Between April and July 2023 (the time of writing), a remarkable shift in user behavior was observed. Traffic on Google Search and Stack Overflow declined significantly, with Google Search seeing drops of 12.9%, 15.3%, and 17.7% in April, May, and June, respectively, while Stack Overflow’s traffic declined by 16%, 18%, and 20% over the same months. These statistics were provided by Bard, Google’s generative AI tool, so treat them as indicative rather than audited figures.
    Google had long been the primary search engine for the masses, and Stack Overflow the go-to resource for programmers.

  2. The positive response to ChatGPT sparked intense competition, leading to the introduction of four major generative AI tools: Bard by Google, Claude by Anthropic, Llama by Meta, and OverflowAI by Stack Overflow. Apple and Amazon are also actively working on their AI innovations, planning to integrate them with their voice assistants Siri and Alexa, elevating their usage to new heights. Additionally, Elon Musk’s xAI company entered the field to further advance AI capabilities.

  3. Software solution providers swiftly integrated generative AI capabilities into their core applications. Microsoft, for instance, seamlessly incorporated generative AI into its Office products and Bing search, transforming the user experience. Google likewise integrated generative AI capabilities into Google Docs and Gmail, streamlining search and response workflows.

  4. Large enterprises in media, investment, and manufacturing sectors proactively invested in and adopted AI. Bloomberg serves as a prime example, leveraging AI to optimize investment and research businesses. AI-based virtual news anchors and AI-driven telecallers are already making significant impacts in the market.

  5. Software product development companies recognized the potential of these powerful tools and have launched numerous applications atop LLMs, gaining substantial traction. As a consultant, I’m actively involved in architecting one such solution.

The introduction of ChatGPT marked a significant inflection point, unleashing a new era of AI disruption that has only just begun. Exciting times lie ahead as the impact of generative AI continues to unfold.

8 Likes

It’s a very interesting thread. I think most of us are eager to learn and update ourselves from people who have deep domain knowledge in AI. I would appreciate it if you could let us know which Indian IT companies are involved in such AI-related work with US companies and are ahead of the others, or have close tie-ups with US companies and expect to gain substantially in the near future. Thanks. Samir Ghosh.

1 Like

Unlocking AI Potential in India: Seizing Missed Opportunities and Exploring Untapped Potential

This article delves into the untapped potential of AI in India, highlighting missed opportunities and exploring the possibilities for future growth. It focuses on the prospect of Indian companies developing powerful tools like LLMs (Large Language Models) and examines two distinct development categories: core product development, exemplified by ChatGPT, and satellite product development, which harnesses the power of core products. Developing both types of products requires three key ingredients: Data, Infrastructure (Compute & Storage), and Machine Learning (ML) Algorithms.
ML algorithms are relatively easy to implement, with generative AI tools providing source code and open-source communities like Hugging Face offering readily available resources. India boasts an abundance of talented Python developers who can implement these algorithms from scratch if needed. The infrastructure requirement can be fulfilled by utilizing public cloud services, which are readily available in India. However, core product development necessitates diverse historical data across various verticals to achieve optimal effectiveness, an aspect in which most Indian companies face challenges. On the other hand, satellite product development for specific businesses and use cases holds great potential. Let’s explore examples of Indian companies with access to valuable data that can be harnessed to create AI applications:
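To illustrate how low the barrier for the ML-algorithm ingredient has become, here is a minimal sketch of loading an open-source model from Hugging Face and running inference in a few lines of Python. The model name and the sample sentence are only examples, and the `transformers` library is assumed to be installed.

```python
# Minimal sketch: running an open-source Hugging Face model locally.
# Assumes `pip install transformers torch`; the model name below is just an example.
from transformers import pipeline

# Download a pre-trained sentiment model and wrap it in an inference pipeline.
classifier = pipeline(
    "sentiment-analysis",
    model="distilbert-base-uncased-finetuned-sst-2-english",
)

print(classifier("The claim settlement process was quick and painless."))
# -> [{'label': 'POSITIVE', 'score': 0.99...}]
```

The hard part, as argued above, is not this code but the data needed to adapt such models to a specific vertical.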

  1. Insurance - In the insurance domain, insurtech platforms like Policy Bazaar, Insurance Dekho, RenewBuy, Turtlemint and Zopper possess a valuable asset: consumer policy data. By harnessing this data, they can achieve remarkable advancements, including predicting insurance requirements, cross-selling, upselling, and automation. A pivotal step in this transformation could be an intelligent, context-specific chatbot that uses historical policy data to make insurance agents and sellers more efficient. For instance, an agent could ask, “Tell me the list of motor policies that are expiring in the next 15 days and whose premium is more than 10,000,” or “Tell me the list of customers that have motor insurance but not health insurance,” and the application would promptly process the data and return the relevant policies. Current products offer only preset filters and formats, whereas agents need much more freedom, ideally in their native language. (A minimal sketch of the kind of structured query such an assistant would have to produce appears after this list.)

  2. Finance - NBFCs (Non-Banking Financial Companies) boast vast cross-bank and consumer data, ideal for predicting customer profiles and financial requirements. Fintech companies like Bajaj Finance and Jio Finance can capitalize on this opportunity by developing AI-driven tools for personalized financial advice and risk assessment.

  3. Retail Consumer Data - Indian retailers like Jio, DMart, Tata Retail, Zomato, and CarDekho can use their data to predict consumer behavior and improve retail operations. This is long overdue, if not already implemented.

  4. Government of India (Human Welfare) - Thanks to JAM (Jan Dhan accounts + Aadhaar + Mobile), the government has structured data that can be used for demographic and income predictions, benefiting multiple purposes. The government can encourage entrepreneurs to develop AI applications that predict the success of social welfare programs and allocate resources more effectively.

  5. Government of India (Legal support) - Leveraging unstructured historical data on court judgments, AI-based products can be developed to offer invaluable assistance to individuals grappling with legal complexities through chat-based interactions. By implementing a legal assistance chatbot, individuals can receive guidance on legal procedures and relevant case law with ease. Envision a ChatGPT-like application designed specifically for legal judgments in India, enabling common citizens to make plain-English queries. For instance, they could ask, “List the cases in which a person was acquitted under section 1954 of IPC.” Such a chatbot holds the potential to simplify access to legal information, empower individuals, and help them navigate the intricate realm of law with confidence.
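To make the insurance example in item 1 concrete, here is a minimal sketch of the structured query an AI assistant would ultimately have to produce from the agent's plain-language question. The CSV file and column names are hypothetical; in a real product an LLM would generate a filter (or equivalent SQL) like this from the native-language query.

```python
# Hypothetical structured query behind the agent's question:
# "Motor policies expiring in the next 15 days with premium above 10,000."
import pandas as pd

# File and column names are placeholders for the intermediary's policy store.
policies = pd.read_csv("policies.csv", parse_dates=["expiry_date"])

today = pd.Timestamp.today().normalize()
window_end = today + pd.Timedelta(days=15)

expiring_motor = policies[
    (policies["product"] == "motor")
    & (policies["expiry_date"].between(today, window_end))
    & (policies["premium"] > 10_000)
]

print(expiring_motor[["customer_name", "policy_number", "expiry_date", "premium"]])
```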

Over the past two decades, Indian IT services companies have been held back in the race for innovation, including AI, by a conservative approach and limited investment in R&D, which remains an ongoing concern. Despite this, the potential for satellite product development remains vast, offering Indian entrepreneurs countless opportunities to create innovative solutions. With the right data and focused efforts, Indian companies can make significant strides in AI, unlocking its potential and paving the way for a brighter future.

Notably, collaborations like Infinite AI partnering with chartered accountants to develop AI-based products showcase the transformative impact such ventures can have on enhancing efficiency and relevance in their respective fields.

4 Likes

In the age of AI, perhaps those who use the tools rather than those who make them might end up making more money. Somewhat like how shovel sellers made more money than the gold miners during the gold rush.

2 Likes

Here is an experiment I was doing on ChatGPT, with a few results.

What did I do? I just copied the data and added a prompt at the end asking for the information I was looking for.

ChatGPT gave me insightful information about the data and the result I was looking for. I loved it, but it is not a replacement for reading the con-calls or annual reports.
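For anyone who wants to script this "paste the data, then ask" workflow instead of using the web UI, a minimal sketch with the 2023-era `openai` Python package could look like the following. The file name, model, and question are placeholders, and an API key is assumed.

```python
# Minimal sketch of "paste the data, then ask" via the API instead of the web UI.
# Assumes `pip install openai` (the pre-1.0, 2023-era SDK) and OPENAI_API_KEY set.
import os
import openai

openai.api_key = os.environ["OPENAI_API_KEY"]

# Data copied from the screener, saved to a text file (placeholder name).
table_text = open("screener_data.txt").read()
question = "Summarise the trend in operating cash flow over the last five years."

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": table_text + "\n\n" + question}],
)
print(response["choices"][0]["message"]["content"])
```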



2 Likes

> How were quantitative analysis, machine learning, and AI implemented in a real-world trading environment?

Video on the birth of quant funds by Jim Simons and his thought process.

Summary

Profitable Hedge Fund: Renaissance Technologies is historically the most successful hedge fund with remarkable performance records, charging high fees.

Mathematical Models: The firm’s success is attributed to advanced mathematical models and powerful computers for trading.

Jim Simons - Founder: Jim Simons, a renowned mathematician, introduced unique research and model-building methods in hedge funds.

Early Ventures: Simons started successful businesses while in academia, showing an early inclination toward wealth creation.

Algorithmic Evolution: The fund transitioned from simple mean-reversion strategies to complex mathematical models and machine learning (a toy mean-reversion sketch follows this summary).

Medallion Fund: Implementation of mathematical models and strategies led to the creation of the Medallion Fund, consistently yielding high returns.

Scientific Approach: RenTech’s success lies in its use of the scientific method to discover, validate, and trade on patterns and anomalies.
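To make the "Algorithmic Evolution" point above concrete, here is a toy sketch of the kind of simple mean-reversion signal such funds reportedly began with. It is purely illustrative and not Renaissance's actual model: buy when the price sits far below its recent average, sell when it sits far above.

```python
# Toy mean-reversion signal: purely illustrative, not any fund's actual strategy.
import numpy as np
import pandas as pd

def mean_reversion_signal(prices: pd.Series, window: int = 20, z_entry: float = 2.0) -> pd.Series:
    """+1 = buy, -1 = sell, 0 = hold, based on the z-score of price vs. its rolling mean."""
    rolling_mean = prices.rolling(window).mean()
    rolling_std = prices.rolling(window).std()
    z = (prices - rolling_mean) / rolling_std
    return pd.Series(np.where(z < -z_entry, 1, np.where(z > z_entry, -1, 0)), index=prices.index)

# Usage with synthetic prices (a random walk around 100):
prices = pd.Series(100 + np.random.randn(250).cumsum())
print(mean_reversion_signal(prices).value_counts())
```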

Accelerating Business Potential: AI Automation for Insurance Intermediaries

This article investigates the potential business scenarios that can be effectively implemented through AI automation by insurance intermediaries. Over the last half-decade, insurance intermediaries have witnessed promising opportunities due to the underpenetrated insurance market and its growth prospects. Initially, intermediaries focused on providing unified web and mobile interfaces for agents to sell insurance policies from various providers, leading to significant business growth. However, as the volume increased, new opportunities arose due to the vast amounts of insurance policy data gathered. In this article, we will explore the next-level business cases that can hyper-automate operations and reduce operational costs for low-margin insurance intermediaries, enabling them to become operationally profitable.

1. Automation of Converting Unstructured Policy Data to Structured Data:

With the growing volume of insurance policies, the manual or semi-automatic process of converting policy data from PDF format to structured data has become a time-consuming and error-prone bottleneck for many insurance intermediaries. Addressing this challenge presents a significant opportunity to optimize operations and reduce processing time. A Man + Machine solution can be developed, leveraging ML models and NLP solutions. Over time, this model can continuously improve through reinforcement learning, delivering accurate results. Automated retraining of the model ensures its relevance and superiority as new incremental policy data is incorporated, reducing data ingestion time and human resource requirements while increasing accuracy and cost savings for intermediaries.
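As a rough illustration of the Man + Machine idea, here is a minimal sketch that pulls a few fields out of a policy PDF with simple rules and queues anything unparseable for human review. The field names, regex patterns, and the use of `pypdf` are assumptions; a production system would replace the rules with a trained extraction model that improves over time.

```python
# Minimal man + machine sketch: rule-based extraction with a human-review fallback.
# Assumes `pip install pypdf`; field names and patterns are illustrative only.
import re
from pypdf import PdfReader

FIELD_PATTERNS = {
    "policy_number": re.compile(r"Policy\s*No\.?\s*[:\-]\s*(\S+)", re.I),
    "premium":       re.compile(r"Premium\s*[:\-]\s*(?:Rs\.?\s*)?([\d,]+)", re.I),
    "expiry_date":   re.compile(r"Expiry\s*Date\s*[:\-]\s*([0-9/\-]+)", re.I),
}

def extract_policy_fields(pdf_path: str) -> dict:
    """Extract known fields from a policy PDF; unmatched fields go to a human."""
    text = "".join(page.extract_text() or "" for page in PdfReader(pdf_path).pages)
    record, needs_review = {}, []
    for field, pattern in FIELD_PATTERNS.items():
        match = pattern.search(text)
        if match:
            record[field] = match.group(1)
        else:
            needs_review.append(field)   # human in the loop for anything the rules miss
    record["needs_human_review"] = needs_review
    return record
```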

2. Intelligent, Context-Aware, Native Language Personal Assistant:

In the multiparty B2B insurance intermediary business, individual agents interact with insurance providers and customers through an online application developed by the intermediary. Currently, agents often rely on human-mediated communication to extract the information needed to improve business and service levels, and there is significant room for improvement in this process. Developing an interactive application that serves as an intelligent personal assistant for agents can transform this aspect. An agent could ask, in their native language, questions such as:

  • “Give me the list of quotes shared with customers but not yet booked as policies?”

  • “Provide details (name, policy number, contract number) of motor policies expiring today but not yet renewed.”

  • “List customers with health insurance coverage lower than the IDV of motor.”

By adopting an AI-driven personal assistant with such flexible, interactive features, intermediaries can attract more agents to the platform and empower existing agents to excel in their business.
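One way to prototype such an assistant is to let an LLM translate the agent's question into SQL over the intermediary's policy database and have the application execute it. The schema, model name, and use of the 2023-era `openai` package below are assumptions, and generated SQL should of course be validated (and run read-only) before execution.

```python
# Minimal NL-to-SQL sketch for the agent assistant; schema and model are assumptions.
import sqlite3
import openai

SCHEMA = """
quotes(quote_id, customer_id, product, premium, status)
policies(policy_number, customer_id, product, premium, expiry_date, renewed)
"""

def answer_agent_question(question: str, db_path: str = "intermediary.db"):
    prompt = (
        f"Database schema:\n{SCHEMA}\n"
        f"Write a single SQLite SELECT statement that answers: {question}\n"
        "Return only the SQL."
    )
    reply = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": prompt}],
    )
    sql = reply["choices"][0]["message"]["content"].strip().strip("`")
    with sqlite3.connect(db_path) as conn:   # validate / restrict to read-only in practice
        return conn.execute(sql).fetchall()

# Example: answer_agent_question("List motor policies expiring today that are not yet renewed")
```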

3. Automation of Calculation, Payment, and Reconciliation of Commissions for Insurance Intermediaries:

The insurance intermediary business relies on earning the difference between commissions received from insurance providers and commissions paid to agents for the policies booked. However, the commission calculation process is highly dynamic, influenced by various factors such as preferred product targets, geography targets of insurance providers, special situations (e.g., COVID-19), and agreements between intermediaries and insurance providers. These complexities lead to frequent changes in commission structures, making the process challenging and error-prone.

Additionally, certain policies may be canceled by customers, resulting in full refunds and zero commission for agents. Conversely, insurance providers may decline policies based on prevailing regulations, requiring deduction of commissions. Compliance with regulations adds further constraints to the commission-sharing process.

The ever-changing behavior of agents prompts intermediaries to launch new schemes, further complicating commission calculations. Reconciling commissions with multiple insurance providers and managing follow-up reminders for discrepancies becomes a continuous, manual, and laborious task. It is crucial to align the commission payment schedule to avoid impacting cash flow and intermediary float.

Without automation, intermediaries either risk leaking commissions or must invest heavily in skilled financial manpower to perform accurate calculations. To overcome this bottleneck, an advanced AI-driven product with water-like flexibility is required to optimize the insurance intermediary business.

An AI-driven solution can address these challenges and revolutionize commission management. By leveraging machine learning algorithms and intelligent automation, the proposed product can:

  • Continuously adapt to changing commission structures and business dynamics.

  • Handle policy cancellations and deductions seamlessly, ensuring accurate commission calculations.

  • Monitor and adhere to regulations, streamlining compliance processes.

  • Provide real-time reconciliation with insurance providers, reducing manual effort and errors (a minimal reconciliation sketch follows this list).

  • Optimize commission payment schedules to ensure smooth cash flow and minimal financial strain.

  • Empower intermediaries to focus on core business functions by automating commission-related tasks.

  • Present comprehensive summary and detailed reports for finance, operations, sales, and compliance teams.

  • Empower the leadership team with a real-time dashboard, offering a clear view of the intermediary’s float positions and commissions earned.
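As a minimal sketch of the reconciliation step mentioned above, the core logic is simply "compute what we expect, compare with what the insurer paid, and surface the differences." The column names and the flat commission-rate rule are simplifying assumptions; real rate grids are far more dynamic, which is exactly why the ML-driven product described here is needed.

```python
# Minimal commission-reconciliation sketch; column names and the flat rate rule are assumptions.
import pandas as pd

def reconcile_commissions(policies: pd.DataFrame,
                          insurer_statement: pd.DataFrame,
                          tolerance: float = 1.0) -> pd.DataFrame:
    """policies: policy_number, premium, commission_rate, cancelled
       insurer_statement: policy_number, commission_paid"""
    expected = policies.copy()
    expected["expected_commission"] = expected["premium"] * expected["commission_rate"]
    # Cancelled policies earn nothing; anything already paid must be clawed back.
    expected.loc[expected["cancelled"], "expected_commission"] = 0.0

    merged = expected.merge(insurer_statement, on="policy_number", how="left")
    merged["commission_paid"] = merged["commission_paid"].fillna(0.0)
    merged["difference"] = merged["expected_commission"] - merged["commission_paid"]

    # Only discrepancies above the tolerance need follow-up with the insurer.
    return merged[merged["difference"].abs() > tolerance]
```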

By embracing this advanced commission management product, insurance intermediaries can improve efficiency, reduce operational costs, and maximize revenue. This automation will free up valuable resources, allowing intermediaries to concentrate on enhancing customer experiences, exploring growth opportunities, and staying ahead in the competitive insurance landscape.

The three key business scenarios discussed in this article present transformative opportunities for insurance intermediaries to streamline processes, enhance agent efficiency, and optimize commission management. By automating the conversion of unstructured policy data, developing intelligent personal assistants, and revolutionizing commission calculations, intermediaries can unlock greater efficiency, reduce costs, and maximize revenue. Embracing AI-driven automation not only enables intermediaries to stay competitive but also empowers them to adapt to dynamic market conditions and ever-changing customer demands. As we explore the potential of AI automation in insurance intermediaries, the path to operational profitability becomes clearer. By harnessing the power of AI, insurance intermediaries can become agile, customer-centric, and strategically positioned for future success in the dynamic insurance industry.

1 Like

Human in the Loop: The Indispensable Role of Humans in an AI-Driven World

As AI continues its rapid advancement across various industries, there is a growing concern about the future role of humans. While it is evident that AI and automation will replace humans in many worker activities, it is crucial to recognize the areas where human expertise remains irreplaceable. Let’s explore the unique capabilities that set humans apart from AI and understand how the “Human in the Loop” approach can lead to a more successful and sustainable AI-driven world.

1. Emotional Intelligence

One of the fundamental differences between humans and AI lies in emotional intelligence. Emotions play a significant role in complex decision-making, especially in ambiguous situations. AI, although highly advanced, lacks the ability to genuinely understand and empathize with emotions. This limitation restricts AI’s effectiveness in tasks that require nuanced judgment and empathy. Professions involving human interactions, such as medical doctors and counselors, heavily rely on emotional intelligence to provide personalized and compassionate support to individuals.

2. Hyper Creativity

Creativity is a domain where human ingenuity outshines AI. While AI can generate music, art, and writing to some extent, it lacks the depth of creativity and originality that humans possess. Innovations, breakthrough ideas, and out-of-the-box thinking are still areas where human minds excel, driving progress and pushing societies forward.

3. Ethical Decision-Making

Ethical decision-making is a critical aspect of various fields, including law, medicine, and research. Humans possess the ability to consider moral dilemmas, cultural nuances, and long-term consequences, enabling them to make value-based judgments that AI struggles to comprehend. AI can process data efficiently, but it lacks the ethical framework and human context necessary for making complex ethical decisions.

4. Accountability

Accountability is a major concern in AI-driven systems. AI cannot be held responsible for its actions, leading to potential challenges in ensuring accountability for the consequences of AI-driven decisions. Humans, on the other hand, can be held accountable for their actions and decisions. This aspect becomes crucial in industries where accountability is essential for building trust and maintaining safety, such as finance, healthcare, and content moderation.

Human in the Loop: Real-Life Examples

1. Fraud Detection in Financial Transactions

In the finance industry, fraud detection is paramount to protect customer accounts from unauthorized activities. While AI plays a significant role in detecting potential fraudulent transactions based on patterns and anomalies, there are scenarios where human intervention becomes essential. When an AI-driven fraud detection system flags a transaction as suspicious, human analysts step in as moderators to carefully review and validate the flagged transactions. Human analysts’ expertise allows them to analyze complex patterns, industry trends, and customer behavior, adding an extra layer of judgment and empathy to the process. This ensures that the final decision on fraud detection is not solely reliant on AI algorithms, reducing the risk of false positives and false negatives.
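A minimal flag-and-review sketch of this pattern looks like the following; the features, thresholds, and the choice of an Isolation Forest are illustrative assumptions. The key point is that the model only decides what gets queued for a human analyst, never what gets blocked outright.

```python
# Minimal human-in-the-loop fraud sketch: the model flags, a human decides.
# Assumes `pip install scikit-learn numpy`; features and thresholds are illustrative.
import numpy as np
from sklearn.ensemble import IsolationForest

# Each row: [amount, hour_of_day, merchant_risk_score] (illustrative features).
rng = np.random.default_rng(0)
history = rng.random((1000, 3)) * [5000, 24, 1]            # past "normal" transactions
new_transactions = np.array([[120.0, 14, 0.1],
                             [49000.0, 3, 0.9]])

model = IsolationForest(contamination=0.01, random_state=0).fit(history)
flags = model.predict(new_transactions)                     # -1 means anomalous

for txn, flag in zip(new_transactions, flags):
    if flag == -1:
        print(f"Queue for human analyst review: {txn}")     # human in the loop
    else:
        print(f"Auto-approve: {txn}")
```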

2. Medical Diagnosis and Human Doctors

In healthcare, AI assists medical doctors in diagnosing diseases and recommending treatments by analyzing vast amounts of medical data. However, human doctors bring essential expertise and experience to complex and rare medical cases. They consider various factors, such as a patient’s medical history, lifestyle, and symptoms, to make personalized and precise diagnoses. Moreover, the doctor-patient relationship is vital for patient care, and human doctors excel in empathizing and establishing trust with patients.

3. Content Moderation on Social Media Platforms

Social media platforms rely on AI algorithms to monitor and moderate user-generated content. While AI can efficiently flag certain types of content that violate guidelines, it can struggle with context and nuance. Human moderators step in as the “Human in the Loop” to make the final judgment, applying the cultural understanding and context that AI might miss. Human moderation ensures more accurate content filtering and prevents unnecessary censorship.

As AI continues to reshape industries, the indispensable role of humans in an AI-driven world becomes evident. Emotional intelligence, hyper creativity, ethical decision-making, and accountability are human capabilities that set us apart from AI. Embracing the “Human in the Loop” approach allows us to leverage AI’s efficiency while benefiting from human judgment, empathy, and ingenuity. By combining the power of AI with human expertise, we can create a more successful and sustainable AI-driven future in various fields, ensuring better outcomes and building a world where humans and AI can coexist harmoniously.

1 Like

Could you provide more details about this? I might be able to assist you in discovering more suitable tools, possibly within the fee-free domain, that align with your needs.

1 Like

Great points about AI in healthcare. The lowest hanging fruit in my opinion is pathological and radiological reporting. These fields essentially involve pattern recognition, which machines can do faster and better than humans if trained adequately.
Diagnosis and medical treatment is the next domain where ML can add a lot of value. It can remove a lot of biases, including decision-making based on economic incentive. Also, the memory and recall, especially for obscure conditions, are incomparable. Both over-treatment and incorrect treatment incidences can be reduced significantly.
Surgical domain, in my opinion, could take longer to disrupt, but it should happen eventually. Robotic surgery currently just involves robotic arms controlled by a trained surgeon. It carries the advantage of better visualisation and more degrees of movement with no tremors. But as the technology evolves, if we can incorporate MR imaging into the visual field, and train the machines in anatomy and pathology, true robot-assisted surgery should not only be possible, but be much more accurate with better outcomes and lower complications than the current standard.
In the Indian context, the problems we face with regard to shortage of doctors can also be addressed with AI. The problem isn’t just shortage, but a totally skewed distribution. While rural areas are grossly underserved, big cities are over-served, leading to cut-throat competition and rampant malpractice. This is because people don’t want to live in areas with poor infrastructure, lower education standards for their kids, and fewer opportunities in general. Not only can AI make remote care possible, it could also tremendously improve the productivity of existing professionals, making healthcare more inclusive.
Can machines ever completely replace doctors though? If there is ever a breakthrough in AGI, then why not? But I suppose we are far away from that.
All in all, there are a number of exciting possibilities for AI in healthcare. Needs to be seen how many of these become a reality, and how soon.

3 Likes

This is awesome, thank you for offering.

I copied the data from Screener and, in ChatGPT, used a prompt asking the generative AI to analyse it.

@Mayank_Bajpai - ChatGPT is not the best-suited tool for your case, as it isn’t up to date with real-time data, which is why you had to provide the context manually. This may also have compromised ChatGPT’s response. As an alternative, consider trying https://bard.google.com/ for generative AI needs that require live data.

I employed the following prompt on Bard: “Provide me with the cashflow analysis of Sun TV from screener.in.”
Here’s the link to the response: Bard Response.
I haven’t yet analyzed the output from Bard, so please let me know if this solution works better for you.

4 Likes

Awesome, Malay! Thank you so much! I will try playing with this. Please keep sharing such wonderful ideas.

I’m pleased that it could be beneficial. If you’re interested in further insights and techniques, you might want to think about joining my Quora space at: https://artificialintelligenceasymmetricbetbyhumans.quora.com/?invite_code=nM83dPHbUJpm2MecUsaG.

Awesome, sure. I will join the forum.

It seems like Bard does not give accurate data. For example, ask it for Astral’s Q1 FY24 results analysis, YoY and QoQ, from screener.in.

@Mayank_Bajpai - Indeed, Bard might not always be perfectly accurate as it’s still in the process of being refined for production-grade quality. I’ve been considering various alternatives that might be beneficial for your needs.

You have a few options to consider, with the first three being free and the fourth being a paid solution:

  1. Utilize a screen capture tool to capture the result’s contents and then paste them into ChatGPT. While this isn’t the most optimal solution, it can serve as a workaround when needed.

  2. Explore Claude, an alternative to both Bard and ChatGPT. Claude offers the unique feature of attaching documents as context. This means you can provide Claude with a results document or annual reports to read and process before generating a response. By simply dragging and dropping the document into the chat window, Claude processes it and makes it available for reference. This enhanced context can lead to more accurate and informative responses, useful for tasks such as summarization, answering questions, and generating creative content. Claude is currently available in beta to users in the United States and the United Kingdom, and may be accessible via VPN. Anthropic, the creator of Claude, plans to expand its availability to more users in the future.

  3. Utilize Google Drive, which supports using PDFs as context for generative AI. You can upload a PDF file of results or annual report to Google Drive, open it in Google Docs or Google Slides, and then click the “Generate” button in the toolbar. This will prompt Bard to generate text based on the PDF’s content. You can also use the “Open with” menu in Google Drive to employ third-party generative AI models like ChatGPT or Claude.

  4. If you’re willing to invest financially, ChatGPT Plus offers access to live internet data (as per reports). This subscription service costs $20 per month and could potentially enhance your experience.

I hope these alternatives prove helpful for your requirements!

4 Likes

What is the impact of the Government of India’s Data Protection Bill, 2023 on machine learning application development?

This article delves into the challenges associated with the development of customer-centric machine learning applications due to the implementation of the Data Protection Bill 2023. The passage of this bill by the Government of India (GOI) is a significant step towards safeguarding clients’ personal data, which was much needed given the rampant mishandling and misuse of such data by companies and government bodies. However, the unintended consequence of its impact on the progress of machine learning applications cannot be ignored.

Understanding this impact requires delving into how machine learning applications function. At the core of a machine learning application is the model, which is built using historical data. The effectiveness of the model is heavily reliant on both the quantity and quality of the data used to create it. While it is possible to create a model using synthetic data, building it with actual data is preferable for better performance. The model must also be retrained regularly with additional data to keep it relevant.

Although the data used to create the model generally cannot be reverse-engineered to recover the original records, it cannot be ignored that the model’s foundation lies in consumer data. The Data Protection Bill introduces two key rules that affect the process of model creation for machine learning applications. For simplicity, let’s discuss this using the example of an insurance intermediary.

1. Limit the use of personal data to its intended purpose:

Under this rule, insurance intermediaries are confined to using collected personal data only for the purpose for which it was initially gathered. For instance, data collected to provide an insurance quote cannot be repurposed for any other objective, including the development of machine learning models. This presents a challenge: intermediaries would need consent from their customers’ customers to use their data for building models. Navigating this for both existing and future models is a substantial hurdle, requiring careful scrutiny of the bill’s directives by legal teams. The fact that intermediaries are using not their direct clients’ (the agents’) personal data but rather the agents’ clients’ personal data adds a further layer of complexity. Obtaining consent across two nested levels, while ensuring the transparency mandated by the bill, presents a considerable challenge.

2. Delete personal data when no longer needed:

Insurance intermediaries bear the responsibility of eliminating personal data that has fulfilled its designated purpose or has become obsolete due to irrelevance, inaccuracy, or lack of necessity. Furthermore, in the event of a client’s data deletion request, intermediaries are obligated to comply. This raises the question of whether intermediaries need to revise their machine learning models by excluding the data that was originally used to create them but is no longer consented to.

If this is indeed the case, it introduces a significant shift in the dynamics of constructing and maintaining machine learning models. For instance, if an intermediary utilized my data, obtained with my consent, to develop a machine learning model, they must erase my data upon the expiration of my policy, my cessation as a client, or my request for data deletion. Should this necessitate the model’s recreation by excluding my data, such a task would demand substantial computational resources and a revised approach to data processing, model development, testing, and deployment, potentially requiring frequent periodic updates.
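If deletion requests do indeed force models to be rebuilt without the withdrawn records, the retraining step reduces, at its simplest, to something like the sketch below. The column names, features, and the logistic-regression model are assumptions purely for illustration; the real cost lies in having to run this pipeline, along with validation and redeployment, every time consent is revoked.

```python
# Minimal consent-aware retraining sketch; columns, features, and model are assumptions.
import pandas as pd
from sklearn.linear_model import LogisticRegression

def retrain_without_revoked(data_path: str, revoked_customer_ids: set) -> LogisticRegression:
    data = pd.read_csv(data_path)
    # Honour deletion requests: drop every record tied to a withdrawn consent.
    retained = data[~data["customer_id"].isin(revoked_customer_ids)]

    features = retained[["age", "premium", "num_claims"]]   # illustrative features
    target = retained["renewed"]                            # illustrative prediction target

    # Rebuild the model from scratch on the remaining data, then validate and redeploy.
    return LogisticRegression(max_iter=1000).fit(features, target)
```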

Ultimately, the delicate balance between data protection and the advancement of machine learning applications requires careful consideration. The path forward involves proactive adaptation to comply with the bill’s regulations while sustaining innovation in the ever-evolving landscape of machine learning.