Featured Article : AI Content Legal Challenges

Following a copyright lawsuit against an AI code generator and industry questions about who actually owns images made by AI text-to-image generators, we look at the legal issues (and others) surrounding generative AI.

The Issue 

The recent lawsuit and questions from coders, artists, musicians, and other creatives show that the challenge is that there is currently a lack of clarity around issues of ownership relating to the output of AI content generating tools. There are many issues at the heart of the whole generative AI area, including:

– AI tools that generate images, code, text, and music are relatively new and how and what they produce hasn’t yet been subject to much legal scrutiny.

– AI content generating tools are built using algorithms that have been trained on previous work produced by humans and, once again, this training process needs more scrutiny.

– As noted by visual artists, the legality and ethics of AI that incorporates existing work needs to be examined. Also, AI art tools that have been trained on work by specific artists can copy their style in the images they produce. This could have a negative impact on the artist’s income.

– It is not clear exactly who owns an image or other piece of content that generative AI tools produce. For example, is it the company that trained the model, or the human who prompts the AI with words?

The Lawsuit – Who Owns AI Generated Code?

The recent class-action lawsuit filed in California was focused on an AI tool called GitHub Copilot which automatically writes working code as the programmer types. The coder who filed the lawsuit argued that the code-writing tool may be infringing copyright because it doesn’t provide any attribution for the open-source code it reproduces. Some open-source code, for example, is covered by a license that requires attribution.

It should be noted that GitHub’s CEO has said that Copilot now has a feature that can be enabled to prevent it from copying existing code.

DALL-E Prompts Questions About Copyright And Ownership Of AI Generated Images 

Another recent example of generative AI that has prompted industry questions relating to copyright and ownership is OpenAI’s DALL·E tool. DALL·E 2 is an AI system that can create realistic images and art from a description in natural language using a process called “diffusion” (see: https://openai.com/dall-e-2/). Although subscribers are given full usage rights to reprint, sell, and merchandise the images they create with the tool, creative professionals have been asking questions about generative AI ownership issues like the ones mentioned above.

Other Examples Of Generative AI Tools 

GitHub Copilot and DALL·E are by no means the only AI generative tools available. Others (and there are many more) include:

– Images (text-to-image) – Starryai, Craiyon, and NightCafe.

– Video (text-to-video) – Synthesia, Lumen5, and Elai.

– Design – Khroma, Designs.ai, and Uizard.

– Audio (text-to-speech voice generators) – Replica, Speechify, and Play.ht.

– Music – AIVA, Jukebox, and Soundraw.

– Text – Jasper.ai, Peppertype, and Copy.ai.

– Code (text-to-code) – Tabnine, PyCharm, and Kite.

Copyright Law 

The Internet has always been a challenging area to police legally, but some basic copyright rules apply. Because so much digital (and non-digital) work is continuously created, there is no single copyright register in the UK for the online world. Instead, the law states that a person automatically enjoys copyright protection when they create something, e.g. an original literary, dramatic, musical, or artistic work (including illustration and photography). This automatic ownership also applies to original non-literary written work, such as software, web content, and databases.

If a person has copyright protection in the UK, it should mean that nobody else can copy, distribute (paid or free), rent, or lend copies of that work, make an adaptation of the work, or put that work on the Internet. However, AI content generating tools are blurring those lines and raising new ownership questions.

Fair Use 

Some legal and tech commentators have pointed to the possible importance and relevance of US copyright ‘fair use’ in making decisions about (for example) the output of text-to-image generators. For example, in Google LLC v. Oracle America, Inc. (2021), the US Supreme Court decided that Google’s use of Oracle’s code was ‘fair use’, without ruling on whether the copied material was protected by copyright.

What Does This Mean For Your Business? 

This is a relatively new area where, as with so much of AI, the technology and its usage appear to be advancing faster than regulation and laws. This is generating more questions than clear answers, thereby creating uncertainty. For creatives such as musicians and artists, generative AI could be a threat, e.g. copying their style or work, as well as an opportunity.

For coders too, generative AI tools could represent a threat although, as with GitHub’s Copilot, features can be added to the tools to lessen it. However, generative AI is a growing and lucrative market with the potential to step on many toes, hence the inevitable lawsuits. Users of these services may also have doubts about the absolute legality of what they produce and publish, e.g. it may not always be clear whether AI-produced text for blogs contains copied material or is even factually accurate.

It appears, however, that the courts in each country will be the way that disputes about infringements by generative AI are decided and settled. Generative AI tool producers will need to keep a very close eye on how their algorithms work and the legal outcomes and implications of various cases as they are decided. For businesses using generative AI tools (e.g. to create images or other content), it undoubtedly meets a need in a new and innovative way, can save time, add value, and be a source of new strengths and opportunities. For the large, well-established photo/image retailers, these tools may currently represent a threat so it remains to be seen how markets such as this react.

Tech News : EU To Ban “Unacceptable” Use of AI

Following last week’s leak of proposed new rules about the use of AI systems, the European Commission looks likely to ban some “unacceptable” uses of AI in Europe.

The Leak and the Letter

This latest announcement that the European Commission aims to ban “AI systems considered a clear threat to the safety, livelihoods and rights of people” (and thereby “unacceptable”) follows last week’s leak of the proposed new rules to govern the use of AI (particularly for biometric surveillance) and a letter from 40 MEPs calling for a ban on the use of facial recognition and other types of biometric surveillance in public places.

Latest

This latest round of announcements about the proposed new AI rules by the EC highlights how the rules will follow a risk-based approach, will apply across all EU Member States, and are based on a future-proof definition of AI.

Risk-Based

The European Commission’s new rules will class “unacceptable” risk as “AI systems considered a clear threat to the safety, livelihoods and rights of people”. Examples of unacceptable risks include “AI systems or applications that manipulate human behaviour to circumvent users’ free will (e.g. toys using voice assistance encouraging dangerous behaviour of minors) and systems that allow ‘social scoring’ by governments.”

High Risk – Remote Biometric Identification Systems

According to the new proposed rules, high-risk AI systems include those used in law enforcement, critical infrastructure, and migration, asylum, and border control management. The EC says that these (and other high-risk AI systems) will be subject to strict obligations, especially “all remote biometric identification systems”, which will be permitted only under “narrow exceptions” such as searching for a missing child, preventing an imminent terrorist threat, or finding and identifying a perpetrator or suspect of a serious criminal offence.

Other Risk Categories

The other risk categories for citizens covered in the proposed new EC AI rules include limited risk (chatbots), and minimal risk (AI-enabled video games or spam filters).

Governance

Supervision of the new rules looks likely to fall to whichever market surveillance authority each nation designates as competent, and a European Artificial Intelligence Board will be set up to facilitate their implementation and drive the development of AI standards.

It is understood that the rules will apply both inside and outside the EU if an AI system is available in the EU or if its use affects people who are located in the EU.

What Does This Mean For Your Business?

AI is now being incorporated in so many systems and services across Europe that there is clearly a need for rules and legislation to keep up with technology rollout to protect citizens from its risks and threats. Mass, public biometric surveillance such as facial recognition systems is an obvious area of concern, as highlighted by its monitoring by privacy groups (e.g. Big Brother Watch) and by the recent letter calling for a ban by 40 MEPs. These proposed new rules, however, are designed to cover the many different uses of AI including low and minimal risk uses with the stated intention of making Europe a “global hub for trustworthy Artificial Intelligence (AI)”. If the rules can be enforced successfully, this will not only provide some protection for citizens but will also help businesses and their customers by providing guidance to ensure that any AI-based systems are used in a responsible and compliant way.