Ethical Innovations: Embracing Ethics in Technology

Canada’s Under-16 Social Media Ban Dilemma

Canada is considering whether to ban social media for children under 16. Support for age-based restrictions is growing across federal and provincial politics, and Australia's new law is being watched as an early test of how such a policy would work.

The federal Liberal party adopted a motion in Montreal to make 16 the age of majority for social media. Prime Minister Mark Carney’s government is examining a possible ban for minors as part of a broader online safety agenda. Conservative Leader Pierre Poilievre has not taken a final public position, but said his team is exploring the idea. Several Conservative MPs, including Brad Redekopp, Ted Falk, Scott Aitchison, and deputy leader Melissa Lantsman, signaled openness to restrictions, while saying the details and limits would matter.

At the provincial level, Manitoba Premier Wab Kinew said his province would pursue a youth social media ban and also restrict AI chatbots for minors, though no detailed plan had been released. Ontario Education Minister Paul Calandra said Ontario is considering stronger limits on cellphone use in schools and wants to work with the federal government on social media policy.

Australia’s law blocks children under 16 from platforms including Facebook, Instagram, TikTok, Snapchat, and X. Under that system, platforms are expected to keep underage users off their services, and companies can face fines of about $45 million Canadian for repeated breaches. Meta said it removed 550,000 accounts in the first days after the law took effect.

Early reporting from Australia has shown mixed results. Sabrina Caldwell of the University of New South Wales Canberra said some children appeared to be spending more time in person, being more active, and feeling less pressure from online life. A January YouGov survey presented at an Australian government and social research conference found that 61 per cent of parents said their children were more socially engaged, 38 per cent said family relationships improved, and 25 per cent said their children had less social connection.

Enforcement remains a major issue. A report from the Age Verification Providers Association said 9 of the 10 biggest platforms were not checking ages when users signed up. The group said many services still relied on users stating their age, even though stronger checks such as ID uploads, face checks, voice checks, or behaviour-based estimates were intended to help verify users. Parents said some children still appeared to be using platforms openly, and some young users were reported to be getting around controls with tools such as VPNs.

Children interviewed in Australia and Canada said being kept off social media could leave them out of group chats, plans, and daily contact with friends. Some said their peers mainly used apps such as Snapchat instead of regular texting. The reporting also said some children recognized risks including bullying and contact with strangers while also describing the social cost of being excluded.

Supporters of restrictions in Conservative circles said many parents viewed children’s time on phones and social platforms as harmful, and some compared possible limits to age restrictions on alcohol and cannabis. They also said the issue could appeal politically to parents, religious voters, and immigrant communities. Other Conservatives warned that a youth ban could become a path for broader Liberal control over online speech, reflecting party concerns about earlier Liberal online harms proposals and what they viewed as government overreach.

Liberal MPs also showed divisions over how far any policy should go. Canadian Heritage Minister Marc Miller said social media regulation falls under federal authority and said Canada needs to respond to serious online harms, while also indicating that any action would require coordination and would not necessarily stand alone. Liberal MPs Yasir Naqvi and Ben Carr pointed to both the addictive nature of platforms and the difficulty of enforcing an outright ban.

Critics and legal experts said a ban by itself might not solve the problem. Sonia Nijjar, a lawyer acting for 22 Ontario school boards in lawsuits against Meta, TikTok, and Snapchat, said children who are strongly drawn to these products may shift their use underground instead of stopping. Those lawsuits allege social media has disrupted learning and school life and seek safer platform design and compensation for school boards. The companies deny the claims and had not yet filed statements of defence.

Privacy and civil liberties concerns have also been raised. Critics said age verification could require measures such as facial recognition or government-issued digital identification. The broader debate now centers on whether Canada will adopt age-based social media limits, and if so, whether they can be enforced effectively without creating new privacy, speech, and access concerns for young people.

Public support in Canada appears strong. An Angus Reid Institute survey found that 75 per cent of respondents supported banning social media for children under 16. Other countries identified as considering similar measures include Austria, Denmark, France, Germany, the United Kingdom, Malaysia, and Indonesia.


Real Value Analysis

This article offers almost no direct action a normal reader can take soon. It reports a political and policy debate, along with some early claims from Australia, but it does not give readers steps, decisions, or tools they can use right away. A parent is not told how to evaluate whether a child is ready for social media, what safer alternatives exist, how to set household rules, how to respond if a child feels excluded, or what to do if a platform asks for age verification. A non-parent gets even less. There are no practical resources, no decision framework, and no clear next move. So on basic usefulness, the article offers little action to take.

Its educational value is moderate at best, and mostly shallow. It gives a surface overview of the debate by presenting both the hoped-for benefit and the main objections. A reader learns that enforcement is difficult, that children may bypass controls, that some parents report improved behavior, and that others report social downsides. That is useful as a sketch. But the article does not go far enough to teach the issue well. It does not explain how age verification systems actually work, what their error rates or privacy tradeoffs might be in practice, what legal structure would be needed in Canada, or how a ban differs from platform design rules, school rules, or parental controls. The statistics are also underexplained. Numbers like 550,000 removed accounts, 61 per cent of parents reporting more social engagement, and 75 per cent public support sound important, but the article does not explain how those figures were gathered, what their limits are, or why they should change the reader's judgment. It informs, but it does not teach enough.

Personal relevance is limited and uneven. For parents of children under 16, teachers, school administrators, and young teens themselves, the issue could become quite relevant because it may affect daily communication, school life, household rules, and online habits. For everyone else, the relevance is mostly indirect and political. The article does not connect the topic to concrete household decisions, financial consequences, legal responsibilities, or immediate safety steps. Even for the people most affected, it stays at the level of public debate rather than practical life. So the relevance exists, but only for a narrower group, and even there it is not made very usable.

The public service value is weak. There is no warning, no safety guidance, no explanation of what families should do while laws are unsettled, and no practical help for schools or caregivers already dealing with the problem. The article mainly recounts a controversy, some early observations, and political momentum. That can be legitimate reporting, but it is not strong public service journalism in a practical sense. It does not help readers act responsibly or reduce harm in their own lives.

There is very little practical advice to review because the article offers almost none. Its implied message is that bans may help but are hard to enforce, and that some children may benefit while others may feel isolated. That is not advice a normal person can follow. It leaves parents with the burden of interpretation but no framework for making decisions. Even the references to age checks and bypass methods are descriptive rather than useful. The article names problems without translating them into realistic household guidance.

Its long term value is limited. It may help readers remember that policy solutions can have tradeoffs, and that enforcement claims should be treated cautiously. But it does not provide a durable framework for judging future proposals, new technologies, or similar moral panics. It does not teach readers how to weigh child safety against autonomy, how to judge whether a policy is working, or how to prepare for changing digital norms. Once the current policy debate shifts, much of the article’s value fades.

Psychologically, the article may leave readers more uneasy than informed. It raises concerns about cyberbullying, grooming, weak enforcement, underground use, exclusion from peer groups, and growing political support for restrictions, but it does not offer a calm way to process these tensions. Parents may finish it feeling that online life is dangerous, bans may not work, and children may suffer either way. That can create frustration and helplessness rather than clarity. The tone is not highly alarmist, but it still presents a hard problem without giving readers a constructive response.

There are some signs of attention-oriented framing, though not extreme clickbait. Phrases such as “first major test,” “strong public message,” “online harms,” and references to large fines and large account removals add weight and urgency. Those choices are not necessarily wrong, but they do make the story feel more dramatic than practical. The article leans on tension between child safety and social exclusion, which is inherently compelling, but it does not match that tension with enough concrete guidance. So it is more gripping than useful.

The biggest missed chance is the lack of real-world guidance for families and ordinary readers. If the article wanted to help, it could have explained how to think about readiness instead of just age, how to distinguish between different types of platforms and risks, how to handle social exclusion if a child is not using the same apps as peers, and how to create rules that are flexible enough to work even if laws change. It also could have shown readers how to judge claims in stories like this. A sensible reader can compare whether multiple accounts repeat the same unsupported numbers, ask whether a statistic reflects behavior or only opinion, and separate symbolic policy claims from evidence that actual harm fell. Those habits would make future coverage more useful.

What the article failed to provide is a practical method for making decisions under uncertainty. A simple approach is to stop asking only whether social media is good or bad and instead ask four smaller questions. What are the main benefits for this child or household? What are the main risks? What controls are realistically enforceable at home? What signs would show the current approach is not working? That turns an abstract culture war into a manageable decision.

A useful household rule is to focus on function, not labels. Instead of debating "social media" in the abstract, look at what the app actually does. Does it allow strangers to contact a child? Does it push endless short-form content? Does it encourage private disappearing messages? Does it make age lying easy? Does it create pressure to perform socially all day? A messaging app used with known family contacts is not the same as a public algorithmic platform. Treating all services as identical leads to weak decisions.

Another practical step is to build access in layers rather than choosing between two extremes, full access or total ban. A child can begin with devices used in shared spaces, shorter time windows, no overnight use, no private accounts, and a rule that adults can review settings together. If trust and judgment improve, access can expand. If secrecy, sleep loss, conflict, or compulsive use increase, access can narrow. That is more realistic than expecting one law or one birthday to settle readiness.

If social exclusion is the main concern, families can address it directly instead of surrendering to the platform by default. Encourage one or two alternative ways for close friends to stay in touch, such as regular texting, small group chats through lower-risk tools, planned calls, or agreed meeting times. The point is not to recreate every social stream, but to preserve real connection. Many digital conflicts become easier when the family identifies the actual need, belonging, rather than assuming the only answer is access to the most popular app.

If safety is the main concern, the most reliable protection is not a rule alone but a repeatable habit of discussion. A child should know what to do if someone asks for secrecy, pressures for images, moves a conversation to another platform, flatters aggressively, threatens embarrassment, or creates urgency. The most protective household message is simple: you will not be punished for bringing a problem early. That makes disclosure more likely, which matters more than perfect prevention.

For interpreting future articles like this, separate three things. First, what is confirmed. Second, what is claimed by interested parties such as governments, companies, or advocacy groups. Third, what is still unknown. If a story gives large numbers without method, treat them as signals, not settled proof. If it describes emotional harms on both sides, ask who is speaking, how broad the sample is, and whether the article distinguishes short term discomfort from serious lasting harm. This keeps you from being pulled too quickly by whichever side sounds more worried.

A practical decision tool for parents is to watch outcomes rather than intentions. If a child’s digital life is linked to worse sleep, secrecy, school disruption, irritability, compulsive checking, or ongoing conflict, the current setup is probably too loose. If access is linked to manageable use, stable mood, normal sleep, open conversation, and continued offline interests, the setup may be working well enough for now. This is not perfect, but it is more useful than arguing from slogans.

It also helps to prepare for policy change without overreacting. If laws tighten, families should already know what matters most: how the child communicates with close friends, how school notices are received, which apps are essential and which are just habitual, and what fallback options exist. If a platform becomes restricted, the family should not be starting from zero. A simple backup plan for communication and routines prevents chaos.

The best real-world takeaway is that this article describes a live policy debate, not a ready-made solution for your life. Its main value is as an early signal that governments may impose stronger age rules. Its weakness is that it leaves readers with tradeoffs but no method. The practical response is to make decisions at the household level using observable behavior, gradual access, clear communication rules, and backup ways to stay connected. That gives a normal person something useful to do, which the article itself does not.

Bias analysis

“The main issue is whether a legal ban can reduce harm to children online, while still being enforced in a practical way.” This line frames the whole story around one policy tool, a ban, instead of a wider set of choices. That is a framing bias because it sets the reader to judge success mainly by harm reduction and enforcement, not by rights, tradeoffs, or other remedies. It helps the ban debate take center stage and hides other ways to deal with the problem. The wording is calm, but it still narrows the field of thought.

“Australia’s new law is being watched as the first major test of that idea.” The words “first major test” give the law extra weight and drama. This is a spotlighting word trick because it makes one case feel like the key proof point for everyone else. It pushes the reader to see Australia as a model case before the text shows whether the test is fair or complete. That can make later facts feel more decisive than they really are.

“Meta said it removed 550,000 accounts in the first days after the law took effect.” This uses a company claim without any proof or check in the same sentence. That is source-selection bias because the text gives a strong number that supports enforcement progress, but does not show how the number was checked or what kinds of accounts they were. It helps the idea that the law had a big early effect. The large number also works as a scale trick because it sounds impressive on its own.

“9 of the 10 biggest platforms are not checking ages when people sign up.” This quote uses a sharp number to create a sense of broad failure. That may be fair, but inside this text we are not told which platforms, what counts as checking, or how the group judged that. So it works as a likely misleading certainty cue. It helps the view that companies are not really enforcing the rules, while leaving key meaning undefined.

“many services still rely too much on users simply stating their age” The words “too much” are loaded but not measured. This is vague judgment language because it tells the reader there is a serious problem without giving a clear rule for how much is too much. It helps critics of weak checks. It also softens the need for proof by turning a measurable issue into a feeling word.

“stronger checks such as ID uploads, face checks, voice checks, or behaviour-based estimates were supposed to help keep children offline.” The phrase “were supposed to help” suggests failure without clearly saying who promised what or what standard was missed. That is a soft blame trick because it hints at broken expectations while hiding the actor. It also makes invasive tools sound normal by calling them “stronger checks.” That wording can hide how serious those methods are for privacy.

“some children still appear to be using platforms openly” The words “appear to be” are hedged, but they still plant the idea that the ban is being bypassed. This is speculation framed as observation. It helps the claim that enforcement is weak, while keeping enough vagueness to avoid a firm burden of proof. The wording leads the reader toward a conclusion without solid detail.

“some young users can get around controls with tools such as VPNs.” This line is selective because it points to one bypass method but gives no sense of how common it is. That is anecdotal framing. It helps the view that bans are easy to defeat. The example may be real, but the wording can make a limited problem feel general.

“some children seem to be spending more time with others in person, being more active, and feeling less pressure from online life.” The key word is “seem.” That is a caution word, but the sentence still stacks only good effects together in a pleasing way. This creates a positive emotional frame around the ban. It helps the policy by painting a healthier picture even though the claim is still uncertain inside the sentence.

“61 per cent of parents said their children were more socially engaged, while 38 per cent said family relationships improved.” This is a numbers framing trick because it gives support numbers without telling us sample limits, question wording, or possible bias in parent reporting. It helps the ban by making gains feel measured and solid. Parents are also speaking for children here, which can hide the child’s own view. The numbers sound exact, so they carry extra force.

“At the same time, 25 per cent said their children had less social connection.” This looks balanced, and it is partly so, but it still uses the same parent-report frame. That is a fake-neutral risk because both the good and bad effects are filtered through one kind of source. It hides whether children themselves described their lives the same way. The balance is real on the surface, but the source base stays narrow.

“being kept off social media can leave them out of group chats, plans, and daily contact with friends.” This line uses exclusion language that is vivid and personal. That is an emotional framing choice because it makes the cost of the ban feel close and social, not abstract. It helps the side warning about harm from exclusion. The wording is not false by itself, but it is chosen to make the reader feel the loss.

“some children understand the safety concerns, including bullying and contact with strangers, even while feeling cut off.” This sentence builds a contrast that makes children sound reasonable and aware, not reckless. That is sympathy framing. It helps children who oppose the ban by showing they accept the safety case but still suffer from the rule. The setup guides the reader to see their objection as thoughtful and human.

“the federal Liberal party adopted a motion in Montreal to make 16 the age of majority for social media.” The phrase “age of majority” is a word-meaning shift. It borrows a term usually tied to legal adulthood and applies it to one online activity. That can make the policy sound more settled and normal than “ban under 16” would sound. It helps supporters by giving the rule a more formal and less harsh label.

“Public opinion in Canada appears strongly supportive.” This is broad opinion framing. The word “strongly” adds force, but the text only gives one poll line after it. That helps build a sense of public backing beyond what is shown here. The word “appears” softens the claim, yet the sentence still nudges the reader toward seeing support as clear and large.

“An Angus Reid Institute survey found that 75 per cent of respondents backed banning social media for children under 16.” This is a statistic used to push social proof. A large share of support can make a policy feel right or inevitable, even though popularity does not prove wisdom or fairness. It helps the ban by showing a big majority. The text does not give the question wording, which matters a lot for a claim like this.

“children who are heavily drawn to these products may simply move their use underground instead of stopping.” The words “heavily drawn” frame some children as strongly pulled by the platforms, almost as if they cannot resist. That is subtle pathologizing language. It helps critics of the ban by suggesting the real problem is deeper than access rules. The line is careful with “may,” but it still guides the reader to picture hidden use as likely.

“Australia’s experience suggests that a ban can send a strong public message about protecting children from online harms” This is message framing, not just outcome framing. It says the ban matters because of the signal it sends, even if results are mixed. That helps the policy by making symbolic value sound important on its own. The phrase “online harms” is broad and serious, which adds moral weight without narrowing the exact harms in that sentence.

“some children may lose important social contact when they are blocked from the platforms their peers use most.” This is another emotional cost frame, but it is also a wording choice that hides scale. The phrase “important social contact” is strong and human, yet we are not told how often this happens or for whom. It helps critics of the ban by stressing what is lost. The use of “may” keeps the claim open while still planting concern.

Emotion Resonance Analysis

The text is driven most strongly by worry. That emotion appears from the start in phrases such as “reduce harm to children online,” “keep children offline,” “bullying,” “contact with strangers,” and “grooming.” These words carry clear danger. The strength of this emotion is high because the text keeps returning to threats against children, which are among the strongest emotional triggers in public debate. The purpose of this worry is to make the issue feel urgent and serious. It pushes the reader to see social media not just as entertainment, but as a possible source of harm that may need strong control.

Fear is a close partner to that worry, but it is more pointed. It appears in the idea that children may be exposed to “cyberbullying and grooming,” and in the suggestion that platforms are failing to protect them. Fear also appears in the mention of children using platforms “openly” despite the law and getting around controls with VPNs. This makes the threat feel hard to contain. The fear is fairly strong because it is tied to the image of adults losing control over a dangerous space. Its purpose is to support the case for legal action by making inaction seem risky.

The text also uses frustration. That emotion appears in lines showing that enforcement is not working smoothly, such as “9 of the 10 biggest platforms are not checking ages,” “many services still rely too much on users simply stating their age,” and “some young users can get around controls.” These phrases suggest failure, weakness, and delay. The feeling is moderate to strong because the article does not explode with anger, but it clearly presents a system that is not doing what it promised. The purpose of this frustration is to make readers dissatisfied with current platform behavior and more open to stronger rules.

A quieter emotion in the text is hope. It appears in the passage saying “some children seem to be spending more time with others in person, being more active, and feeling less pressure from online life.” It also appears in the survey result that “61 per cent of parents said their children were more socially engaged” and that family relationships improved for some. This hope is moderate in strength because the wording is careful and uncertain, using phrases such as “some early signs” and “seem.” Still, the purpose is clear. It gives readers a positive image of what life might look like if social media use falls. That softens the harshness of a ban and makes it easier to accept.

Relief is also present in that same section. The idea that children may feel “less pressure from online life” suggests release from stress. The emotion is mild to moderate, but it matters because it shows social media not only as harmful in a dramatic sense, but also as a source of daily strain. This relief helps support the ban by making it sound like protection from a burden, not just a punishment.

Sadness appears in the lines about exclusion. The text says children can be “left out of group chats, plans, and daily contact with friends,” and that those without accounts “feel excluded.” These are emotionally heavy phrases because they touch on loneliness and being cut off from others. The strength of this sadness is moderate to strong. It is easy for a reader to picture a child missing out on friendship and belonging. The purpose is to show the human cost of a ban. This makes the article feel more balanced, but it also deepens the emotional pull by showing that harm may exist on both sides.

A related feeling is isolation. While sadness describes the pain, isolation describes the condition itself. It appears in “cut off” and “lose important social contact.” This emotion is strong because social connection is one of the most basic needs in childhood. Its purpose is to make the reader pause before treating a ban as a simple fix. It gives emotional force to the argument that safety policies can also create loss.

The text also carries sympathy for children. That sympathy appears when children are shown as understanding the danger “even while feeling cut off.” This matters because it presents them as thoughtful and aware, not careless or foolish. The emotion is moderate, and its purpose is to help the reader see children as people affected by the policy, not just as subjects to be controlled. This can shift the reader from a purely protective mindset toward a more mixed and humane response.

Concern about power and control also appears in the text. The law is described with phrases such as “block children under 16,” “keep underage users off,” and fines of “about $45 million Canadian.” These phrases create a sense of force and authority. The emotion here is not exactly fear alone. It is also unease about hard enforcement and the scale of the state and corporate response. The strength is moderate. Its purpose is to show that this is not a small rule, but a major intervention with real consequences.

The article also uses public confidence and social approval. This appears in “political support... is growing” and “Public opinion in Canada appears strongly supportive.” The survey figure of “75 per cent” adds to that feeling. This emotion is mild, but it is persuasive. It creates the sense that support for a ban is normal, common, and socially accepted. Its purpose is to build trust in the policy by showing that many people back it. This can make readers more willing to agree because the measure feels mainstream rather than extreme.

Moral seriousness runs through the whole text. Words like “protecting children,” “harm,” “safety concerns,” and “online harms” give the debate a moral tone. This is not one sharp emotion, but a steady emotional frame. The strength is high because the article keeps the focus on right and wrong, care and neglect, safety and risk. Its purpose is to make the issue feel larger than a normal policy disagreement. It becomes a question of duty.

These emotions guide the reader in several ways. Worry and fear push the reader toward caution and make legal action seem easier to justify. Hope and relief offer a reward image, suggesting that less social media could lead to healthier, calmer lives. Sadness and isolation complicate that picture by making the ban feel costly for some children. Sympathy helps the reader care about those children rather than dismiss them. Frustration with weak enforcement encourages the belief that platforms cannot be trusted to manage the issue on their own. Public support adds a sense of legitimacy. Together, these emotions do not simply describe the issue. They lead the reader through a controlled emotional path: first danger, then possible solution, then social cost, then renewed pressure for action.

The writer uses emotion to persuade mainly through word choice. Neutral terms could have been used, but the text chooses emotionally charged ones. “Harm,” “bullying,” “grooming,” and “contact with strangers” are stronger than more general terms such as “risk” or “negative experiences.” “Left out,” “cut off,” and “excluded” are stronger than saying children may “have fewer online interactions.” These choices make the issue feel personal and immediate. They help turn policy into lived experience.

The text also increases emotional force by placing positive and negative effects side by side. It says some children may be “more socially engaged” and “more active,” but also that some have “less social connection.” This contrast sharpens the emotional effect because the reader is asked to weigh two different kinds of harm and two different kinds of hope. The result is not emotional neutrality. It is emotional tension, which keeps the reader engaged and makes the issue feel difficult and important.

Another persuasive tool is the use of children and parents as human voices. Even though the piece includes political leaders and experts, the most emotional passages are about what children feel and what parents report. This gives the issue a family setting rather than a purely legal one. Family-centered language is powerful because it invites care, protection, and emotional identification. It also makes the policy feel close to ordinary life.

The text uses numbers to strengthen emotion as well as reason. The figure of “550,000 accounts” makes enforcement sound large. The claim that “9 of the 10 biggest platforms” are not checking ages makes failure sound widespread. The survey numbers from parents and the “75 per cent” support figure make reactions feel measurable and real. These numbers do more than inform. They give emotional claims the appearance of firmness. That can increase trust and make concern or support seem justified.

There is also a repeated pattern of threat followed by response. The text returns again and again to a structure in which children face online harm, authorities respond with bans or restrictions, and critics warn that the response may not fully work. This repetition keeps the reader focused on danger and control. It prevents the issue from drifting into a neutral debate about technology and instead holds attention on safety, enforcement, and loss.

The article does not rely on dramatic stories about one named child, but it still uses a kind of shared personal story. The reader is asked to imagine children missing group chats, feeling pressure online, being more active offline, or being exposed to strangers. These scenes are simple and easy to picture. That imagined experience gives the text emotional power without needing a long narrative.

The overall emotional design of the piece is carefully mixed. It does not use only fear, because fear alone could make the policy seem harsh or desperate. It adds hope, relief, sympathy, and sadness so the issue feels complex but still urgent. This balance helps the writing persuade more effectively. It makes the reader feel that the problem is real, that action may be needed, but that action also carries a human cost. That emotional balance gives the article credibility while still steering attention toward the need for stronger control of children’s social media use.
