Ethical Innovations: Embracing Ethics in Technology

Actor Sues to Block Government Using Her AI Likeness

Anila Bisha, an Albanian actress, has filed a petition asking an administrative court to immediately halt the government’s use of her face and voice in “Diella,” a government-created artificial intelligence avatar that Prime Minister Edi Rama presented as an AI minister to assist with public procurement. Bisha’s filing requests a temporary injunction suspending use of her likeness; her team says a formal lawsuit seeking damages and compensation will follow.

Bisha acknowledges signing a contract allowing her image and voice to be used on the government’s e-Albania online services platform (by her account, through the end of 2025), but says she was never told those recordings would be repurposed as a ministerial AI avatar, and that she was neither consulted nor offered a new contract when the character was elevated to ministerial status. Her legal team argues that the government’s use of the materials stripped her of control over her image and voice, and that a patent application on the avatar filed by the National Agency for Information Society has harmed her ability to work; both assertions are made in the filing.

The filing says Bisha sought meetings with government officials after the AI minister’s unveiling but received no response, prompting the court petition. The government did not immediately respond to requests for comment. Court documents state that continued use, and any patenting, of the avatar would cause the plaintiff immediate and irreparable harm; the court has not yet ruled on the suspension request.

Diella was shown in a computer-generated video wearing traditional national dress and was presented as a tool intended to manage public procurement and reduce corruption; reports also noted that the avatar has been described as producing more than 80 AI “children,” each intended to assist a member of parliament. The case raises questions about consent, reuse of biometric and voice data, transparency, and accountability in government AI deployments.

Original Sources: 1, 2, 3, 4, 5, 6, 7, 8

Real Value Analysis

Actionable information: The article does not give a reader clear steps they can use immediately. It reports that Anila Bisha has filed an injunction and plans a lawsuit over the government’s use of her face and voice for an AI “minister,” and it notes she had previously signed a contract for use on an online platform. Beyond that factual account, the piece offers no practical instructions for someone who wants to respond to or emulate her actions: it does not explain how to file a similar complaint, how to read or challenge contracts, how to request takedowns, or what legal remedies are available in Albania or elsewhere. If you are a reader wondering what to do in the same situation, the article gives no procedural guidance, forms, contacts, timelines, or realistic next steps to follow.

Educational depth: The article stays at the level of surface facts and context. It names the person involved, the technology (an AI virtual minister), and notes prior controversies over AI likenesses, but it does not explain the legal principles that might apply (for example, personality rights, consent scope, or contractual interpretation), the technical way an AI could reuse someone’s face and voice, or the governmental policies governing the e-Albania platform. There are no numbers, charts, or detailed explanations of causation or systems. Readers do not come away with an understanding of why this kind of dispute arises, how likeness is typically protected in different jurisdictions, or how AI models are trained and deployed in ways that implicate consent.

Personal relevance: For most readers the story is of limited direct relevance. It is most relevant to actors, public figures, content creators, or anyone whose image or voice might be repurposed by government or private organizations. For the general public it is primarily a news item about a single legal action in Albania rather than information that affects personal safety, finances, or health. The article does not draw out broader implications for citizens’ interactions with government AI tools, or how ordinary users should adjust behavior or expectations.

Public service function: The article largely recounts an event without offering safety guidance, consumer advice, or emergency information. It does not warn citizens about possible misuse of likeness or give steps to protect personal rights. As a public service story it is weak: it reports a controversy but provides little context that would help others act responsibly or understand how to respond to similar deployments of AI by authorities.

Practical advice: The piece includes no actionable practical advice that an average reader can follow. It does not, for example, suggest how someone might check whether their image or voice has been used, how to contact authorities or platform operators, or how to preserve evidence if they suspect misuse. Any guidance a reader might need to protect their rights is absent.

Long-term impact: The article focuses on a single, ongoing legal dispute and does not provide guidance that would help readers plan ahead or change behavior to avoid similar problems. It does not examine policy implications, possible regulatory responses, or long-term strategies for protecting likeness and voice against AI reuse. As written, it leaves readers without tools to avoid or mitigate similar risks in the future.

Emotional and psychological impact: The report is relatively restrained and factual; it may provoke concern among people who value control over their image and voice, but it does not offer reassurance or constructive next steps. Because no remedies or guidance are given, readers who identify with the actor’s complaint might feel alarmed or helpless without knowing what they can do.

Clickbait or sensationalism: The article is not overtly sensationalistic; it reports a newsworthy legal move. However, the report uses the dramatic element of a government-created “AI minister” using a real person’s likeness without extending the story into useful context. That lack of explanatory depth gives the headline more punch than the content supports and misses an opportunity to substantively inform readers.

Missed chances to teach or guide: The article misses multiple teaching opportunities. It could have explained the typical scope and limits of consent in voice and image contracts, what standard contractual clauses to watch for, how courts have treated AI reuse of likeness elsewhere, how to document and preserve evidence of misuse, or how to contact platform operators and regulators. It also could have provided basic technical context about how AI systems repurpose voice and face data and what transparency measures to demand from public-sector AI deployments. The absence of those elements is a clear missed chance.

Practical, realistic guidance readers can use now

If you are worried about your image or voice being reused by an AI, first check any contracts or terms you signed. Read any clause about scope, duration, geographic limits, and specific permitted uses; if the language is vague about “platform use” or “public services,” treat that as potentially broad and consider getting a lawyer to interpret it. Preserve evidence: save copies of the contract, screenshots or recordings of the AI using your likeness, timestamps, and any communications with the platform or organization. That documentation will be essential if you later need to pursue a takedown or legal claim.

Contact the organization that deployed the AI and make a clear written request to stop using your likeness, describing exactly where and how you appear and asking for removal or disabling of the relevant asset. Keep copies of all correspondence and note response times; a prompt, documented request can support interim measures. If you can’t reach a satisfactory resolution, consult a lawyer experienced in personality rights, copyright, or contract law in your jurisdiction; even an initial consultation can tell you whether you have grounds for an injunction or damages claim.

If you cannot immediately secure legal help, publicly documenting the issue in a factual way can create pressure on the deployer and generate witnesses, but be mindful of defamation and privacy risks. For creators who want to avoid future problems, negotiate explicit, narrow licensing terms that specify permitted platforms, uses, timeframes, and whether your likeness may be used in derivative technologies such as AI. Retain the right to revoke or renegotiate for significant new uses, and seek compensation tied to novel or commercial deployments.

For the general public, demand transparency about public-sector AI: ask officials which data sources and consent processes were used, whether audits were performed, and which recourse citizens have if they object. When interacting with government platforms, treat consent forms carefully: if a form is vague about “public services” or “future uses,” request clarification before agreeing. These steps are practical and do not require technical expertise, but they make it more likely that you can spot and respond to inappropriate reuse of your likeness.

Bias analysis

"An Albanian actor has taken legal action to stop the government from using her face and voice for a government-created artificial intelligence minister." This sentence frames the actor as taking action and the government as the actor being opposed. The order favors the actor’s perspective by leading with her action and motive, which helps the actor’s side and sets the reader to view the government as the respondent rather than explaining the government’s intent first.

"Anila Bisha, filed a request with an administrative court demanding that the government immediately cease using her image and voice for the virtual minister called Diella and seeks a temporary injunction." The word "demanding" is strong and frames her request as forceful. That word choice pushes an emotional view of her behavior and can make her seem aggressive rather than formal or procedural, which helps portray her as combative.

"Bisha’s legal team says this is the first step to prevent what they describe as misuse of her likeness, and representatives plan to file a formal lawsuit seeking damages and compensation." The phrase "what they describe as misuse" distances the text from the claim by putting it in the lawyers’ words, which is neutralizing language. It softens the allegation and signals uncertainty, helping readers treat the claim as subjective rather than a stated fact.

"The actor previously signed a contract allowing her image and voice to be used on the government’s e-Albania online platform, but she says she was not told those elements would be applied to an AI minister and that she regards the use as a political statement." The clause "but she says" separates her claim from fact and introduces doubt about her statement. That phrasing privileges the factual contract while treating her interpretation as opinion, which helps the government’s position by foregrounding the contract.

"The Albanian government did not provide a comment when contacted." This is a succinct passive-style report that states non-response. It omits context such as when or how they were contacted, which hides information that could affect how the lack of comment is seen. The lack of detail can lead readers to infer government avoidance without evidence.

"The AI minister was introduced by Prime Minister Edi Rama and was presented as a tool intended to manage public procurement processes." The phrase "was presented as" introduces distance and suggests that the stated purpose might not be the whole story. That wording can seed doubt about the stated function and implies potential political motives, favoring skepticism about the government claim.

"Previous controversies over AI voices and likenesses were noted as a point of context in public discussion." The phrase "were noted" is vague and passive; it signals controversy without naming specifics. That vagueness evokes concern about AI likeness issues but hides details and sources, which nudges readers toward seeing the situation as part of a larger problematic pattern without evidence in the text.

Emotion Resonance Analysis

The text conveys a mixture of concern, frustration, defensiveness, and a hint of indignation centered on the actor’s response to how her likeness and voice are being used. Concern appears in the actor’s legal action and the request for an immediate stop and temporary injunction; words like “filed a request,” “immediately cease,” and “seeks a temporary injunction” signal a serious worry about harm or misuse. The strength of this concern is high because formal legal steps are described, which communicates urgency and perceived risk. This concern guides the reader to view the situation as important and potentially harmful, encouraging attention and sympathy for the actor’s position.

Frustration and defensiveness are expressed where the actor “says she was not told” about the AI minister use and “regards the use as a political statement.” These phrases show irritation at a perceived lack of consent and a need to protect reputation; the intensity is moderate to strong because an existing contract is mentioned but the actor claims it did not cover this specific use. This frustration steers the reader toward questioning the government’s transparency and fairness. Indignation or moral objection is present in the claim that the use is a “political statement”; labeling the use this way carries moral judgment and implies misuse beyond a neutral technical application. The strength of indignation is moderate, and it serves to frame the government’s action as ethically problematic, nudging readers to side with the actor’s objections.

There is also a cautious assertiveness in the legal team’s description of this move as “the first step” and the plan to file a formal lawsuit for “damages and compensation.” This language conveys determination and forward action; its strength is measured but purposeful, indicating deliberate escalation rather than an emotional outburst. That assertiveness encourages readers to see the actor as taking persistent, structured steps to remedy perceived wrongs. A subtle tone of skepticism or withholding appears in the note that “the Albanian government did not provide a comment when contacted.” The absence of comment creates a sense of incompleteness and gently prompts doubt about the government’s willingness to respond; the emotional weight is mild but effective in nudging the reader to question official accountability. Finally, the mention of “previous controversies over AI voices and likenesses” evokes a background of apprehension and mistrust toward similar technologies; the strength is low to moderate, but it places the current story in a wider context that can amplify concern and caution among readers.

Overall, these emotions direct the reader toward sympathy for the actor, wariness of the government’s actions, and interest in legal and ethical resolution. The writer uses specific verbs and phrases tied to legal process and consent to increase the emotional pull, choosing words with urgency (“immediately cease,” “temporary injunction”) and moral framing (“political statement,” “misuse of her likeness”) instead of neutral descriptions. Repetition of legal steps (request, temporary injunction, formal lawsuit) reinforces the seriousness and persistence of the actor’s response. The contrast between a previously signed contract and the actor’s claim of not being informed about this particular use creates tension and highlights a perceived breach of trust, making the situation feel more dramatic. Mentioning prior controversies subtly links this case to a pattern, making the possible harm seem broader than a single incident. These writing choices heighten emotional impact by emphasizing urgency, perceived injustice, and continued action, guiding readers to view the situation as ethically charged and deserving of attention.
