Juul Survives a Blow From the FDA—for Now
Plus: Instagram cracks down on age verification, Microsoft says it will stop using AI to track emotions, and Twitter wants to…
Can you currently buy a Juul e-cigarette? That depends on what day of the week it is.
Earlier this week the FDA denied marketing authorization for Juul, which began selling its e-cigarettes in 2015 (though it has operated under various company names since 2007). The FDA said the reason for the denial was that Juul “failed to provide sufficient toxicology data to demonstrate the products were safe,” Ars Technica reports, and that as a result the agency could not complete its toxicology assessment. The FDA specifically pointed to “potentially harmful chemicals leaching from the company’s proprietary e-liquid pods” as a concern.
However, Juul pushed back and secured a temporary win. In a court filing submitted to the US Court of Appeals for the DC Circuit, Juul called the FDA ban “arbitrary and capricious” and suggested the agency had succumbed to pressure from Congress. The federal appeals court then blocked the FDA order until it can hear further arguments on the issue.
The FDA’s denial and the subsequent stay are just the latest developments in a years-long battle between regulators and Juul. Back in 2018, the FDA launched an investigation into sales of Juul products to underage consumers, requested marketing materials from the company, and demanded that it submit a plan for thwarting sales to teens. The following year, the FDA sent Juul a warning letter over its claims that vapes were less harmful than traditional cigarettes. In early 2020, the agency banned fruit-flavored e-cigarette pods in the US.
The latest ban, if it ever goes into effect, would apply to the Juul device itself, a sleek vaping pen, and to four specific liquid cartridges, all of them tobacco- or menthol-flavored, the flavors that mimic traditional cigarettes. The denial from the FDA came just a couple of days after the agency said it would also limit the amount of nicotine allowed in conventional cigarettes sold in the US.
Here’s some more news.
Instagram’s Age Crackdown
On Thursday, Instagram announced that it will introduce new tools for verifying users’ ages on the platform. When users change their birth date in a way that moves them across the age-18 threshold, Instagram will now require them to verify the change. That means uploading an ID, getting mutual friends to vouch for them, or uploading a video selfie. The last option is offered through a partnership with the digital identity company Yoti, which analyzes the video selfie with its facial age-estimation technology to gauge the person’s age.
Instagram says its goal is to tailor the app differently for teens and adults and to keep those experiences distinct. However noble the stated intentions, the move still makes privacy and AI experts nervous. After all, Instagram’s parent company, Meta, has a long history of data-sharing and privacy lapses.
For now, Instagram is testing the age verification requirement only with users in the US.
Microsoft Ditches Controversial Emotion-Detecting AI
On Tuesday, The New York Times reported that Microsoft will remove features from its Azure cloud computing platform that use facial recognition software to track the physical attributes and even emotions of people in images. The features have been controversial, criticized as potentially both biased and inaccurate.
Microsoft is no stranger to dubious ethical situations. In 2018 it came under fire for using the Azure platform to work with ICE, the US Immigration and Customs Enforcement agency. But now, Microsoft seems eager to get out ahead of the criticism. The move to rein in Azure came as part of Microsoft’s newly released Responsible AI Standard, a document it says will guide the way the company uses AI in its products.