Police Warn Parents About iPhone’s New ‘NameDrop’ Feature

Multiple U.S. police departments issued alerts about a new iPhone feature that allows two closely held devices to wirelessly share contact information and images, warning that the feature could pose a risk to children and other vulnerable individuals.

“If you have an iPhone and have done the recent iOS 17 update, they’ve installed a feature called NameDrop. This feature allows you to easily share contact information and photos to another iPhone by just holding the phones close together,” the Middletown Division of Police, Ohio, said in a Nov. 26 Facebook post. “PARENTS: Don’t forget to change these settings on your child’s phone to help keep them safe as well!”

“With this feature enabled, anyone can place their phone next to yours (or your child’s phone) and automatically receive their contact information to include their picture, phone number, email address, and more, with a tap of your unlocked screen,” the Watertown Police Department in Connecticut said in another post.

The NameDrop option is turned on by default. To disable the feature in iOS 17:

  • Open the “Settings” app
  • Tap “General”
  • Tap “AirDrop,” which controls the phone’s file-sharing features
  • Turn off the “Bringing Devices Together” option to disable NameDrop

Similar warnings about NameDrop were issued by the Oakland County Sheriff’s Office, Michigan, and the Greenville County Sheriff’s Office, South Carolina.

In its warning, the Greenville office pointed out that “the only way your contact information will be shared is if you and the other person hold your phones very close to each other, unlock them both, and then accept the swap.”

“There is no way for anyone to get your information without it first popping up on your screen and you or them physically tapping the ‘accept’ prompt.”

The risk posed by the feature is something that can “easily be mistaken or looked past by elderly, children, or other vulnerable individuals,” it said.

Children’s Safety Risks

Speaking to CBS 12, Amir Sachs, a cybersecurity expert with Blue Light IT, said the police warnings largely stem from concerns about children’s safety.

“I guess if a predator wanted to come in and managed to put the phone near a kid’s phone the fear is that the kid’s details will be transferred to the predator’s phone,” he said. “But I don’t think the fear is based on reality.”

Mr. Sachs pointed out that the two phones need to be within about an inch of each other for the NameDrop feature to work.

When two iPhones are placed together, NameDrop offers users two options: “Receive Only” or “Share.”

“Receive Only” means that a user will only receive information transmitted by the other individual. No information about the user will be sent to the other person. Choosing “Share” allows the user to send their information to another individual.

The security concern is that users could accidentally allow NameDrop to transmit their information to strangers.

“Yes, we know that it allows you to share it and you can refuse but many people do not check their settings and realize how their phone works,” the Oakland County Sheriff’s Office said in its warning.

NameDrop is designed to let users quickly share contact information with many people. At a meeting, for example, a person can add multiple contacts to their phone in a matter of seconds rather than typing and saving each one individually.

Image Manipulation Risks

The police warnings come as concerns about children’s digital safety are rising. Children who share personal information, particularly pictures, with strangers can put themselves at risk of exploitation.

In a recent statement, the FBI warned that sending images to strangers can result in “malicious actors” using content manipulation technologies and services to generate “sexually-themed images that appear true-to-life in likeness to a victim, then circulate them on social media, public forums, or pornographic websites.”

Yaron Litwin, a digital safety expert, told The Epoch Times that “one of our recommendations is to be a little more cautious with images that are being posted online and really try to keep those within closed networks, where there are only people that you know.”

Given the risks posed by AI image manipulation, a coalition of attorneys general from 50 U.S. states and territories urged Congress to study how the technology can be used to generate child sexual abuse material (CSAM) and to implement laws to prosecute such crimes.

In a Sept. 5 letter to congressional leaders, the attorneys general warned that AI can create deepfakes of children, subjecting them to potential exploitation.

“Whether the children in the source photographs for deepfakes are physically abused or not, creation and circulation of sexualized images depicting actual children threatens the physical, psychological, and emotional wellbeing of the children who are victimized by it, as well as that of their parents.”

According to a report by the nonprofit National Center for Missing and Exploited Children, its tipline received more than 49 million reports of CSAM images last year, up from 33 million in 2020.

Masooma Haq contributed to this report.

From The Epoch Times