
How to identify AI-related IP risks in your startup or product

Artificial Intelligence (AI) products offer exciting opportunities, but hidden within them are serious intellectual property (IP) risks that can threaten your startup’s future. From copyright infringement in training data to patent risks, trade secret leaks, and open-source licence traps, these dangers are often overlooked by founders until it is too late.

Introduction

Kumar leaned back in his chair, rubbing his forehead, staring at his laptop screen in frustration. His AI-driven health tech startup had just finished a call with a potential US investor, and one casual question from their legal counsel had left him puzzled.

“They asked if we have done an AI IP risk assessment,” Kumar muttered, shaking his head. “What on earth is that supposed to mean? I thought our biggest risk was the model making wrong predictions… but now they are talking about intellectual property risks in AI?”

Across the table, Lily smiled. As an intellectual property lawyer, she had heard this concern from at least five different startup founders this month alone.

“Kumar,” she said, setting down her coffee, “you are not the only one confused. Most founders building AI products are focused on their tech, their features, and their market fit. But they forget that AI brings a new set of legal risks, especially with intellectual property. These risks are invisible at first, but they can hit you hard when you least expect it. A legal notice… a takedown demand… even a lawsuit.” 

Kumar raised his eyebrows. “Lawsuit? For what? For using AI? Is it really that serious?”

Lily leaned forward, lowering her voice. “Yes. AI-related IP risks are very real, and they are very different from ordinary software risks. Think about it. Your AI product deals with four things most software does not handle so dangerously: data, models, outputs, and open source components. All of these can create copyright, patent, trade secret, or trademark risks if you are not careful.”

“But we are just a small Indian startup,” Kumar said, puzzled. “Why should we worry about this now? Does Indian law even have rules for AI and IP yet?”

Lily smiled knowingly. “You are right, Indian law is still catching up. But the main intellectual property laws, the Copyright Act, the Patents Act, and the Trade Marks Act, already apply to AI, even if indirectly. And if you plan to raise funds from abroad, sell your product globally, or partner with foreign companies, they will expect you to have sorted this out. In the US, in Europe, even in China, IP risks in AI are already under the microscope.”

Kumar ran a hand through his hair, sighing. “So what exactly are these IP risks? Are they only about data?”

“Not at all,” Lily said, pulling out a notebook from her bag. “There are five big ones you need to know:

First, copyright risks, from the data you use for training and from the outputs your AI generates.

Second, patent risks if your model or method is already patented by someone else.

Third, trade secret risks, for example, if your AI accidentally leaks confidential information.

Fourth, trademark risks, like if your AI generates someone else’s brand or logo by mistake.

And fifth, open source licence risks, because many AI tools and libraries come with strict open source licence terms you must follow.”

Kumar’s eyes widened. “Wait. Even the data I use to train my AI could be a risk?”

Lily nodded. “Yes. You would be surprised how messy this can get. Look at what happened to Stability AI in the UK. They are being sued by Getty Images for allegedly scraping millions of copyrighted photos without permission to train their image generator. Or the US case Thomson Reuters v. Ross Intelligence, where a legal AI startup allegedly used copyrighted legal databases to train their system without consent.”

“Yikes,” Kumar said, sitting up straight. “But nothing like that has happened in India yet, right?”

“Not yet,” Lily said, flipping a page in her notebook, “but the principles are the same. Under the Indian Copyright Act, 1957, using protected data without permission is infringement, no matter how clever your AI model is. And if you ever sell abroad, foreign laws like the US Copyright Act or the EU AI Act could affect you as well.”

Kumar groaned softly. “Great. Another thing to worry about.”

Lily smiled. “Do not worry. I will walk you through it. The key is to identify these risks early before they explode. Let us start with the most common problem area: your AI’s training data.”

Kumar stirred his cold coffee absentmindedly, his mind racing.

“Lily, this is starting to sound serious. But can you explain these risks properly? I mean… what exactly do you mean by copyright risk or patent risk in an AI product? You keep mentioning these names, but I need to understand what they actually mean for my startup.”

Lily smiled and leaned back in her chair.

“Fair enough. Let us take them one by one, and I promise I will not use legal jargon.”

She pulled her notebook closer.

1. “First up, copyright risks. This is probably the biggest and most common trap for AI startups. Why? Because your AI system needs data to learn, lots of it. That training data often includes text, images, music, and code, all of which could be protected by copyright. If you use such data without permission, you could be infringing someone’s copyright, even if the data came from the internet.”

Kumar frowned. “But hang on, the internet is full of free data. If something is on Google, surely it is free to use, right?”

Lily shook her head firmly. “No, Kumar. That is the biggest myth. Just because something is on the internet does not mean it is free to use for commercial AI training. Take the Getty Images v. Stability AI case I told you about: Getty claims that Stability AI scraped their copyrighted photos without consent to train an AI model. That is a classic example of training data copyright risk. The same thing can happen in India: under the Copyright Act, 1957, using protected content for commercial purposes without permission is infringement here too.”

Kumar scribbled a note. “Alright. Copyright risks, I get it. What is next?”

2. “Patent risks,” Lily said, flipping the page. “Did you know that many AI algorithms are patented, especially in the US and Europe? If you build your AI system using a patented method or process without a licence, you could be infringing a patent even if you did not copy it knowingly.”

Kumar’s mouth fell open slightly. “You mean someone could have patented the AI algorithm itself?”

“Exactly,” Lily said. “For example, Google and IBM hold hundreds of AI-related patents covering neural network designs, data processing techniques, and even specific machine learning methods. In India, the Patents Act, 1970, does not allow pure algorithms to be patented, but the US and Europe do. So if your product sells abroad or uses international cloud platforms, you might still be exposed to their patent rights.”

Kumar shook his head. “Great. So what else?”

3. “Trade secret risks,” Lily continued. “This one is tricky. Suppose your AI model accidentally learns or reproduces confidential data from another source, say, a partner’s database or scraped private material. You could be sued for misappropriating trade secrets.”

He blinked. “But AI models do not ‘know’ they are leaking trade secrets, right?”

Lily chuckled softly. “Of course they do not, but the law does not care. If the output reveals confidential or protected information, you are liable. In India, trade secret protection is not codified like in the US Defend Trade Secrets Act, 2016, but courts here recognise breach of confidence and contract terms. A famous international example was when OpenAI’s GPT models accidentally regurgitated memorised training data, some of which contained private details.”

Kumar scribbled again. “Okay… copyright, patent, trade secrets. What else?”

4. “Trademark risks,” Lily said, ticking off her list. “Imagine your AI text generator produces fake ads mentioning ‘Nike’ or ‘Apple’, or creates a fake logo that resembles Starbucks. That is trademark infringement; your startup could get sued even if it was the AI that generated it.”

“That actually happens?” Kumar asked, wide-eyed.

“All the time abroad,” Lily said. “There have been cases of AI generating celebrity faces or known brands without permission. In India, under the Trade Marks Act, 1999, this could count as infringement if it confuses consumers or dilutes the brand value.”

Kumar groaned. “There is more, is there not?”

5. Lily laughed. “One last one: open source licence risks. Many AI models and tools, like PyTorch, TensorFlow, and even parts of OpenAI’s software, are open source. But open source comes with licence terms. Some, like MIT or Apache licences, are permissive. Others, like GPL or AGPL, can force you to disclose your own source code if you use their software improperly in your product. A lot of startups ignore this and get into legal trouble later.”

Kumar scribbled frantically. “Alright. So five major risks are copyright, patents, trade secrets, trademarks, and open source. All these could hit an AI product?”

Lily nodded. “Exactly. And if you do not catch them early, they can blow up at the worst possible time during funding, acquisition, or international expansion.”

Kumar sighed. “Lily… you are ruining my weekend plans. But you are right. I need to do this risk check properly.”

She smiled kindly. “That is what I am here for. Let us start with the most dangerous area of training data. It is where most AI IP risks begin.”

Lily leaned forward, resting her elbows on the table.

“Kumar, the best way to handle these risks is not to panic; it is to spot them early. Like fixing a leaky pipe before the ceiling collapses.”

Kumar smiled weakly. “Sounds like my old apartment in Koramangala. Okay, tell me. How do I start this… risk spotting?”

Lily pulled out a fresh page in her notebook. “Let us break it down into the five areas you need to check in your product. Ready?”

Kumar nodded, pen poised.

1. Training data risks

Lily tapped her pen on the notebook. “The first and biggest risk comes from your AI’s training data. Where is your training data coming from, Kumar?”

Kumar thought for a moment. “A mix of open datasets, some scraped web data, some bought from third-party providers… and a bit of publicly available health data from research papers.”

Lily raised an eyebrow. “Did you check the copyright status of all those datasets? Are they under licences like Creative Commons, MIT, or open government data terms?”

Kumar blinked. “Honestly? No idea.”

Lily smiled gently. “Most startups forget this. But under the Indian Copyright Act, 1957, even compilations of data can be protected if there is skill and effort involved, like a curated medical dataset. And scraping public websites without permission may violate their terms of service or copyright, which is what Getty Images is claiming against Stability AI in the UK. You cannot assume ‘public’ means ‘free to use’; always check the licence.”

Kumar scribbled: ‘Make data provenance log. Check licences.’

Lily continued. “Also, you must record where your data came from, a proper ‘data provenance log’. Investors and partners in the US or Europe will ask for this.”
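
In practice, a provenance log can be as simple as a spreadsheet or a small script. Here is a minimal sketch in Python, assuming a plain CSV file; the dataset name, URL, and licence below are illustrative placeholders, not real entries:

```python
import csv
from datetime import date

# One row per training dataset: where it came from, under what licence,
# whether commercial use is allowed, and when someone last checked it.
# All values below are illustrative placeholders, not real datasets.
records = [
    {
        "dataset": "open-health-articles",           # hypothetical name
        "source_url": "https://example.org/corpus",  # placeholder URL
        "licence": "CC-BY-4.0",
        "commercial_use_ok": "yes",
        "date_checked": date.today().isoformat(),
        "notes": "Attribution required in product documentation",
    },
]

with open("data_provenance_log.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=list(records[0].keys()))
    writer.writeheader()
    writer.writerows(records)
```

The tooling matters less than the habit: every dataset gets a recorded source, licence, and review date before it goes anywhere near training.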

Kumar sighed. “Right. What is the second risk area?”

2. Model risks

“Second,” Lily said, “your AI model itself. Did you build it fully in-house, or are you using any third-party models?”

Kumar shrugged. “We fine-tuned an open model from Hugging Face. Some TensorFlow components. Maybe some PyTorch.”

Lily smiled knowingly, “Classic startup stack. Did you read the licences of those models?”

“Um… they are open source. That is good, right?”

Lily shook her head. “Not always. Apache 2.0, MIT, these are fine. But if you touch anything under GPL or AGPL, you could be forced to share your own model’s source code if you deploy it. And some models have non-commercial restrictions, meaning you cannot sell your product unless you get a commercial licence.”

Kumar looked horrified. “I thought open source meant safe…”

“It depends,” Lily said softly. “Always check the licence terms before using or fine-tuning a model, even if it comes from Hugging Face, GitHub, or TensorFlow Hub.”

Kumar scribbled: ‘Check all model licences. Avoid GPL/AGPL if closed source.’
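
For models pulled from a hub, the declared licence usually sits in the model card metadata, so the check can be scripted. Here is a minimal sketch, assuming the huggingface_hub Python client and the Hub’s “license:” tags; the repo IDs and the allow-list are illustrative assumptions:

```python
from huggingface_hub import model_info

# Licences a (hypothetical) closed-source product is comfortable shipping.
ALLOWED = {"apache-2.0", "mit", "bsd-3-clause"}

# Illustrative repo IDs; list every model your product actually touches.
MODELS = ["bert-base-uncased", "distilbert-base-uncased"]

for repo_id in MODELS:
    info = model_info(repo_id)
    # The Hub exposes the declared licence as a "license:<id>" tag.
    licences = [t.split(":", 1)[1] for t in info.tags if t.startswith("license:")]
    verdict = "OK" if any(l in ALLOWED for l in licences) else "NEEDS LEGAL REVIEW"
    print(f"{repo_id}: {licences or ['no licence tag']} -> {verdict}")
```

The declared tag is only a starting point; read the full licence text as well, since some model licences add their own riders, such as use restrictions or attribution duties.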

3. Output risks

“What your AI produces can also land you in trouble,” Lily said. “Imagine your model writes fake medical advice using another company’s brand, like ‘Dr. Reddy’s cures this disease’, or generates an image of a famous celebrity’s face. You could face trademark or publicity rights infringement.”

Kumar scratched his head. “But I am not telling the model to do that…”

“It does not matter,” Lily said. “You are responsible for the output. In the US, there are discussions about whether AI-generated outputs infringe copyright or personality rights. India may follow the same route, especially with the rising use of celebrity images, song voices, and product names in AI outputs.”

Kumar scribbled again: ‘Set output filters for brands, celebrities, logos, medical advice.’
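
A first version of such a filter can be a simple screening layer in front of the model’s output. This is only a sketch; the blocked brands and claim patterns below are illustrative, and a production system would need much larger lists plus human review:

```python
import re

# Illustrative deny-list: brands, names, and claim patterns a health-tech
# product might screen for before an output ever reaches a user.
BLOCKED_TERMS = ["nike", "apple", "starbucks", "dr. reddy's"]
RISKY_PATTERNS = [
    re.compile(r"\bcures?\b", re.IGNORECASE),      # unverified medical claims
    re.compile(r"\bguaranteed\b", re.IGNORECASE),
]

def screen_output(text: str) -> tuple[bool, list[str]]:
    """Return (is_safe, reasons); any hit should go to human review."""
    reasons = []
    lowered = text.lower()
    for term in BLOCKED_TERMS:
        if term in lowered:
            reasons.append(f"blocked term: {term}")
    for pattern in RISKY_PATTERNS:
        if pattern.search(text):
            reasons.append(f"risky claim pattern: {pattern.pattern}")
    return (not reasons, reasons)

ok, why = screen_output("Dr. Reddy's cures this disease")
print(ok, why)  # False, with a brand hit and a claim hit
```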

4. Patent infringement risks

“Patents are the silent trap,” Lily said. “Your AI’s method, like the way it processes medical data or predicts health outcomes, could be covered by someone else’s patent, especially in the US or EU.”

“But algorithms are not patentable in India,” Kumar said.

Lily nodded. “True, under the Indian Patents Act, 1970, pure algorithms and mathematical methods are not patentable. But abroad, they can be. And if you launch in the US or Europe or even use their cloud platforms, you could infringe their patents without realising it.”

Kumar’s face fell. “How do I check this?”

“You need a freedom-to-operate (FTO) analysis done by a patent expert to make sure your model’s methods do not step on someone else’s patented invention.”

He scribbled furiously: ‘FTO check before global launch.’

5. Open source compliance risks

“Lastly, open source AI tools,” Lily said. “Many startups build using free libraries and models. But if you mix licences wrongly, for example pulling GPL code into your closed, proprietary product, you could create a ‘licence conflict’ that forces you to open source your own code.”

Kumar groaned. “Another legal headache.”

She laughed. “Do not worry, this one is easy if you run a regular open-source audit of your AI stack. List every library, model, or dataset you use and its licence. There are even tools to automate this now.”

He made a note: ‘Regular OSS licence audit. No GPL unless ready to share code.’
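
A basic version of that audit needs nothing beyond the Python standard library, as in the sketch below; dedicated tools such as pip-licenses or ScanCode go further and also cover transitive dependencies. The copyleft markers here are a deliberate simplification:

```python
from importlib.metadata import distributions

# Copyleft licences that can force disclosure of proprietary code.
COPYLEFT_MARKERS = ("GPL", "AGPL", "LGPL")

for dist in distributions():
    name = dist.metadata["Name"]
    licence = dist.metadata.get("License") or "unknown"
    # Licence info often lives in classifiers rather than the License field.
    classifiers = [c for c in (dist.metadata.get_all("Classifier") or [])
                   if c.startswith("License ::")]
    combined = " ".join([licence, *classifiers])
    flag = "  <-- REVIEW" if any(m in combined for m in COPYLEFT_MARKERS) else ""
    print(f"{name}: {licence}{flag}")
```

Run before each release, a list like this catches a GPL or AGPL dependency before it quietly becomes part of the shipped product.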

Lily smiled and closed her notebook.

“That is your basic IP risk map, Kumar. Five points: training data, models, outputs, patents, and open source compliance. Miss any of these, and you are inviting trouble like lawsuits, funding delays, or reputation damage. Catch them early, and you are safe.”

Kumar sighed but smiled. “Lily, you just saved me from a disaster I did not even see coming. I owe you dinner.”

She laughed. “First, fix these risks. Then dinner.”

Common traps for startups

Kumar stretched and leaned back in his chair, letting out a long breath.

“This is a lot to take in, Lily. No wonder most founders I know never bother thinking about these things until they get a legal notice. But tell me… what are the most common mistakes you see? Like, the real traps that startup teams like mine fall into?”

Lily smiled knowingly. “Good question. There are three classic traps, Kumar. I see them again and again with AI startups both in India and abroad.”

She ticked the first point off on her fingers.

1. “Trap number one: thinking that ‘open’ means ‘free for all’.”

Kumar frowned. “What do you mean?”

“I mean this,” Lily said patiently. “Many startups believe that if something is publicly available, like data on the internet or an open model on GitHub, they can use it however they want. But that is not true. Public availability does not mean public domain or free use. Everything has rights attached unless clearly waived by the owner. Remember the Getty v. Stability AI case? That is exactly what they are fighting about, unauthorised scraping of copyrighted material.”

She leaned forward slightly. “In India, too, under the Copyright Act, 1957, even online data is protected unless the copyright owner has released it under an open licence like Creative Commons. Scraping without checking the rights can make you liable.”

Kumar shook his head. “I definitely need to check where all our data came from…”

Lily smiled and ticked off the next point.

2. “Trap number two: assuming that the AI model provider has covered all risks.”

“Oh,” Kumar said, eyes widening. “We did that. We thought if we use an open model from Hugging Face, it is their problem, not ours.”

Lily laughed softly. “Almost every founder says that. But when you deploy, fine-tune, or commercialise an AI model, even an open one, you are responsible for its legal compliance. The model provider usually gives it ‘as is’, disclaiming all liability. If that model was trained on stolen or unlicensed data, you could be the one who gets sued, not them.”

She raised her eyebrow. “Some providers like OpenAI or Google make their terms very clear; they will not protect you if you misuse their models.”

Kumar scribbled a big note: ‘Check model provider terms. No legal protection for us.’

3. “Trap number three: assuming that AI output cannot infringe rights because ‘the machine did it’.”

“This is the last trap,” Lily said gently, “and the most dangerous one.”

Kumar blinked. “But… that is true, is it not? The AI made the output, not me.”

Lily shook her head firmly. “Not according to the law. You, as the deployer or provider of the AI system, are legally responsible for its output. If your AI writes defamatory text, generates a logo similar to Apple’s, or spits out private medical data, you are liable. There is no Indian case yet, but in the US, rights holders are already sending takedown notices and even filing suits. And if your AI causes consumer confusion or damages a brand’s reputation, you could be sued under the Trade Marks Act, 1999 in India.”

Kumar sighed loudly. “So… the machine may be smart, but the legal blame falls on us humans.”

“Exactly,” Lily said, smiling.

“Any international examples of this?” Kumar asked.

“Well,” Lily said, thinking for a moment, “there was that case where AI image generators were producing faces of real celebrities without permission, triggering concerns about privacy and publicity rights, something that could easily be challenged under US law or even under India’s evolving right to personality. And there was concern when chatbots like GPT generated fake or misleading brand statements. This is why many big companies now have filters in place to prevent outputs that may cause legal risks.”

Kumar scribbled more notes.

‘Install output filters. Prevent brand, personal likeness misuse.’

He leaned back again, groaning softly. “Lily, my head is spinning. How does anyone keep track of all this?”

Lily smiled kindly. “You will not need to memorise it all. You just need a checklist, one you can follow every time you build or deploy a new AI feature. Shall I give you one?”

Kumar grinned. “Yes, please. I love checklists more than legal notices.”

They both laughed.

Practical checklist: AI IP risk scan

Lily pulled out a small printed sheet from her folder and slid it across the table to Kumar.

“Here. Your very own AI IP risk checklist. Simple, clear, and meant for busy founders like you.”

Kumar picked up the sheet and grinned. “Lily, you really came prepared, did you not?”

She chuckled. “Of course. I knew you would ask for this. Every time I work with an AI startup, they want something like this. So here we go, five points you must check before you launch, pitch, or scale any AI product.”

She pointed to the first item.

1. Is your training data clean and properly licensed?

“First,” Lily said, “where did your AI’s training data come from? Is every dataset properly licensed, open under terms like Creative Commons, or explicitly public domain? Did you scrape any data from websites without permission?”

Kumar thought aloud. “Some datasets we bought. Others we found from open government portals. And… we did scrape some articles from public health blogs.”

Lily raised an eyebrow. “That last part could be risky. You need to check if scraping was allowed by those sites’ terms of service, or you could be in breach of contract or copyright infringement, like in the Stability AI and Getty fight. In India, too, unlicensed use of copyrighted content for commercial purposes can be infringement under the Copyright Act, 1957.”

She smiled. “So the checklist point is simple,” she said, and showed the list to Kumar. It read:

  • Data sources logged?
  • Licences confirmed?
  • No scraping without permission?

Kumar scribbled: ‘Audit data sources. Make provenance log.’

2. Are your AI models used under proper licences?

“Next,” Lily said, tapping the sheet, “you must check the licences of any AI models or libraries you use. Did you fine-tune a third-party model? Did you follow its terms, especially if it says ‘non-commercial use only’ or has GPL obligations?”

Kumar scratched his head. “We fine-tuned a model from Hugging Face… I should check if its licence allows commercial use.”

Lily nodded. “Exactly. Your checklist here: 

  • Model licences reviewed?
  • Commercial use allowed?
  • No GPL/AGPL surprises forcing disclosure?”

Kumar added: ‘Check every model’s licence. No blind reuse.’

3. Is the AI output filtered for potential infringement?

“Third,” Lily continued, “you need to control what the AI produces. Could your AI generate someone else’s brand, logo, celebrity face, or private info? Are you using output filters or moderation?”

Kumar frowned. “We do not have strong output filters yet… but maybe we should. Especially for health claims.”

Lily smiled. “Good idea. In India, false or misleading commercial claims can also trigger action under the Consumer Protection Act, 2019, and trademark misuse under the Trade Marks Act, 1999. Abroad, it is even stricter. Your checklist:

  • Brand, logo, celebrity filters in place?
  • Sensitive content warnings active?
  • Medical or financial advice output checked?”

Kumar wrote: ‘Install output moderation layer.’

4. Have you done a patent freedom-to-operate (FTO) check?

“Fourth risk: patents,” Lily said. “Did you get an FTO check done to see if your AI method or process might infringe someone’s patent in the US, EU, or elsewhere?”

Kumar winced. “No… we never thought about that. But we do plan to expand to the US soon.”

“Then you must,” Lily said. “The checklist here:

  • Patent search done for the AI method?
  • FTO opinion from a patent lawyer?
  • No active patent blocking your launch in key markets?”

Kumar sighed and wrote: ‘Engage patent lawyer for FTO check.’

5. Are your open-source AI tools compliant?

“Finally,” Lily said, “your open source tools. Have you reviewed the licences of all the AI libraries you use, like TensorFlow and PyTorch, and made sure you are not violating any terms by mixing incompatible licences?”

Kumar smiled. “At least that part we have covered; we used mostly Apache and MIT.”

“Good,” Lily said. “But check regularly. Your checklist here:

  • Licence list maintained?
  • No incompatible licence mixing?
  • GPL/AGPL used only with full compliance?”

Kumar added: ‘Run OSS licence audit before each release.’

Lily leaned back, smiling. “That is it. Your five-point AI IP risk scan. If you check these regularly, even once a quarter, you will avoid 90 per cent of the legal traps that catch AI startups.”

Kumar stared at the sheet, then grinned. “Lily, you should turn this into a poster and sell it to founders like me. You have no idea how useful this is.”

She laughed. “Maybe I will. But for now, keep this copy. It could save your startup someday.”

Cases of AI IP risk

Kumar leaned forward eagerly, tapping the checklist sheet.

“Lily, this is super useful. But you keep mentioning these real cases from abroad. Can you tell me some proper examples of things that actually happened, so I can explain to my co-founder why this matters?”

Lily smiled, pulling out her tablet. “Of course. Here are some of the biggest real cases where AI startups and companies landed in IP trouble. These are not theories; they are real lawsuits and disputes shaping how AI IP risk is understood globally.”

She tapped the screen and read aloud.

1. Thomson Reuters v. Ross Intelligence (United States, 2020)

“We already touched on this one earlier, but let me tell you some more,” Lily said. “Ross Intelligence, a legal AI startup, was sued by Thomson Reuters, the owner of Westlaw, for allegedly using its copyrighted legal database to train their AI-powered legal research tool without permission.”

“Legal tech is eating legal content. Irony,” Kumar murmured.

Lily smiled. “Exactly. This shows that even text databases are protected under copyright, not just images or music. And that matters for India too, because under the Indian Copyright Act, 1957, databases created with skill and effort are protected as literary works.”

2. OpenAI’s GPT regurgitation problem (United States, 2023)

“Even OpenAI had trouble,” Lily said. “Some researchers found that GPT models, when prompted cleverly, could regurgitate chunks of their training data, including real phone numbers, names, and phrases from copyrighted books.”

Kumar frowned. “So the AI was leaking what it memorised?”

“Exactly. This raises privacy, trade secret, and copyright risks. OpenAI had to improve its models to stop this, but it shows that AI output can cause unintentional IP or privacy breaches.”

3. The Samsung ChatGPT leak incident (South Korea, 2023)

“Here is a corporate disaster story,” Lily said. “Samsung engineers accidentally pasted confidential chip design code into ChatGPT, seeking help with debugging. That violated their internal confidentiality policy, because OpenAI retains some data for model improvement unless that is explicitly disabled.”

Kumar gasped. “You mean they leaked their own trade secrets… to OpenAI?”

“Unfortunately, yes. This shows how careless AI use can cause trade secret exposure even if no hacking was involved.”

Lily put down the tablet. “These are real warnings, Kumar. These cases are shaping global AI law, and Indian startups like yours will not stay untouched. Investors, partners, and foreign clients will expect you to comply with these evolving standards.”

Kumar looked thoughtful. “It is funny. I thought AI risk was all about wrong predictions or bias… not legal battles over who owns the data or the output.”

Lily smiled. “That is what most founders think until it is too late. But now you know better.”

Conclusion: Why early action saves money and reputation

Kumar sat back in his chair, tapping the checklist sheet with his pen. His coffee was long cold, but his mind was sharp now.

“Lily,” he said softly, “this is a lot to take in. Honestly, I never thought AI could bring this many legal risks. We were just focused on building the best product. But now I feel like we have been walking through a minefield without knowing it.”

Lily smiled gently. “Most AI founders do, Kumar. You are not alone. When startups rush to build features, scale fast, or raise funds, they forget that intellectual property risks are silent until they explode. But here is the good news.”

She leaned forward, lowering her voice like she was sharing a secret.

“If you catch these risks early, during design, development, and testing, you can save your company from huge trouble. You can avoid takedown notices, lawsuits, investor rejections, or worse, a ruined reputation just before a funding round.”

Kumar sighed. “We are pitching to two foreign VCs next month. I guess they will definitely ask these questions… just like that US investor did last week.”

“Exactly,” Lily said. “Investors now expect AI startups, even small ones in India, to have done basic IP risk checks. They have seen these global cases. They know the risks. If you can show them that you understand these risks, have run an IP audit, and can prove your models and data are clean, they will trust you more. It might even help you win a deal against a competitor who ignored this.”

Kumar smiled wryly. “So being careful with the law is also a competitive advantage now?”

Lily laughed. “Absolutely. In fact, some US and EU investors will walk away from AI startups that have not done an IP risk scan. They do not want surprises during due diligence.”

She tapped the table gently.

“Remember, Kumar, handling AI IP risks early is cheaper than fighting court battles later. A simple licence check today could save you crores in damages tomorrow. A proper training data audit could save you from an embarrassing public lawsuit. And a clean open-source policy could protect your proprietary code from being forced open by a GPL violation.”

Kumar leaned back and smiled. “Lily, you have ruined my weekend plans… but you have probably saved my startup.”

Lily grinned. “That is what friends and lawyers are for.”

He held up the checklist. “Right. First thing Monday, team meeting. We are starting this AI IP risk audit. No excuses.”

Lily raised her coffee cup. “To safe AI, and to startups that do things the right way.”

They clinked their cups and laughed.

Frequently asked questions on AI and IP risks

Kumar glanced at Lily as he finished his notes.

“Lily, before I let you escape, just one last favour. Quick-fire round. I have some rapid questions that every founder probably wants to ask about AI and IP. Ready?”

Lily laughed. “Go on then. I love a good rapid fire.”

1. Who owns AI-generated content: me, the AI model maker, or no one?

Lily smiled. “In India, under the Copyright Act, 1957, only human authors can claim copyright, not machines. So if your AI writes code, generates art, or composes music, then technically you or your company may own the rights, as the person who ‘caused’ the creation. The AI model maker, like OpenAI or Google, does not claim rights over your outputs, unless their licence says otherwise. But this area is still evolving globally; watch for changes, especially from the UK and US.”

2. Can open-source AI tools really create legal risks?

“Definitely,” Lily said. “If you mix open source licences carelessly, for example, using GPL libraries in your proprietary product, you might be forced to open your own code publicly. Also, some models have ‘non-commercial’ or ‘research only’ clauses. Always check every open source licence and run a proper open-source audit before release.”

3. What is a freedom-to-operate (FTO) check, and do Indian startups really need it?

Lily nodded. “FTO is a patent clearance search to make sure your AI method or process does not infringe any active patents in markets like the US, EU, or China. Indian startups planning to export, launch apps globally, or raise international funding absolutely need this. Patents may not affect you under Indian law, but they can stop you abroad.”

4. If AI creates infringing output by mistake, can I still get sued?

“Sadly, yes,” Lily said. “As the AI system’s deployer, you are responsible for its output. If your chatbot writes something defamatory or your image generator creates a fake brand logo, you could be liable even if the AI did it on its own. Install filters, checks, and human moderation to prevent this.”

5. Is this only a problem abroad, or should Indian startups worry too?

“Indian law already covers copyright, patents, trademarks, and trade secrets, and courts will apply these to AI use too,” Lily said. “Also, if you sell, pitch, or get funded abroad, their laws will apply to you. So, yes, Indian startups must worry early.”

Kumar smiled and scribbled the last note on his sheet.

“Lily, you should put this on LinkedIn. Every founder here needs to hear this.”

She laughed. “Maybe I will. But for now, you have your checklist. Use it well, Kumar. It will save you later.”

He grinned. “Thanks to you, we will be the rare startup that gets AI IP risks right from day one.”
