Apple, Microsoft and Google are heralding a new era of what they describe as artificially intelligent smartphones and computers. The devices, they say, will automate tasks like editing photos and wishing a friend a happy birthday.
But for this to work, these companies need something from you: more data.
In this new paradigm, your Windows PC will take a screenshot of everything you do every few seconds. An iPhone will stitch together information across the many apps you use. And an Android phone can listen to a call in real time to alert you to a scam.
Are you willing to share this information?
This change has significant implications for our privacy. To deliver the new personalized services, the companies and their devices need more persistent, intimate access to our data than before. In the past, the way we used apps and pulled up files and photos on phones and computers was relatively siloed. AI needs an overview to connect the dots between what we do across apps, websites and communications, security experts say.
“Do I feel safe giving this information to this company?” Cliff Steinhauer, a director at the National Cybersecurity Alliance, a nonprofit focused on cybersecurity, said of the companies’ AI strategies.
All of this is happening because OpenAI’s ChatGPT upended the tech industry nearly two years ago. Apple, Google, Microsoft and others have since overhauled their product strategies, investing billions in new AI-driven services.
The biggest potential security risk with this change stems from a subtle shift in the way our new devices work, experts say. Because AI can automate complex actions, like removing unwanted objects from a photo, it sometimes requires more computing power than our phones can handle. That means more of our personal data may have to leave our phones to be processed elsewhere.
Information is transmitted to the so-called cloud, a network of servers that process requests. Once the information reaches the cloud, it could be seen by others, including company employees, bad actors and government agencies. And while some of our data has always been stored in the cloud, our most deeply personal, intimate data that was once only for our eyes—photos, messages, and emails—can now be linked and analyzed by a company on its servers.
Tech companies say they have gone to great lengths to secure people’s data.
For now, it’s important to understand what will happen to our information when we use AI tools, so I asked the companies about their data practices and interviewed security experts. I plan to wait and see whether the technologies work well enough before deciding whether sharing my data is worth it.
Here’s what you need to know.
Apple Intelligence
Apple recently announced Apple Intelligence, a suite of AI services and its first major entry into the AI race.
The new AI services will be built into Apple’s fastest iPhones, iPads and Macs starting this fall. Users will be able to use them to automatically remove unwanted objects from photos, summarize web articles and write replies to text messages and emails. Apple is also overhauling its voice assistant, Siri, to make it more conversational and to give it access to data across apps.
During Apple’s conference this month, when it introduced Apple Intelligence, the company’s senior vice president of software engineering, Craig Federighi, showed how it might work: Mr. Federighi pulled up an email from a colleague asking him to postpone a meeting, but he was supposed to see a play that night starring his daughter. His phone then pulled up his calendar, a document with details about the play, and a maps app to predict whether he would be late to the play if he agreed to a later meeting.
Apple said it tries to process most AI data directly on its phones and computers, which would prevent others, including Apple, from accessing the information. For tasks that must be sent to servers, Apple said, it has developed safeguards, such as encrypting the data and deleting it immediately.
Apple has also taken steps to prevent its own employees from accessing the data, the company said, and it will let security researchers audit the technology to verify that it lives up to these promises.
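For the technically curious, here is a rough sketch, in Python, of the on-device-first routing Apple describes: handle small requests locally, and encrypt anything that must travel to a server. Every name, threshold and function body below is an illustrative assumption, not Apple’s actual software.

```python
# A minimal sketch of on-device-first AI routing, loosely following the
# approach Apple describes. All names and placeholder bodies are
# hypothetical, not Apple's implementation.

ON_DEVICE_WORD_LIMIT = 512  # assumed capacity of the small local model


def run_local_model(prompt: str) -> str:
    # Placeholder: a real device would run a compact on-device model here,
    # so the data never leaves the phone.
    return f"[local answer to: {prompt[:40]}]"


def encrypt(data: bytes) -> bytes:
    # Placeholder only: a real system would use authenticated encryption.
    return bytes(b ^ 0x5A for b in data)


def send_to_private_cloud(ciphertext: bytes) -> str:
    # Placeholder for a request to a hardened server that, per Apple's
    # description, deletes the data immediately after responding.
    return f"[cloud answer derived from {len(ciphertext)} encrypted bytes]"


def handle_request(prompt: str) -> str:
    if len(prompt.split()) <= ON_DEVICE_WORD_LIMIT:
        return run_local_model(prompt)  # stays on the device
    return send_to_private_cloud(encrypt(prompt.encode()))


print(handle_request("Summarize this email thread."))
```

The privacy question turns on that branch: only requests too big for the local model ever leave the device, and then only in encrypted form.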
But Apple has been vague about which new Siri requests could be sent to the company’s servers, said Matthew Green, a security researcher and associate professor of computer science at Johns Hopkins University who was briefed by Apple on its new technology. Anything that leaves your device is inherently less secure, he said.
Microsoft’s AI laptops
Microsoft is bringing artificial intelligence to the old-fashioned laptop.
Last week, it began shipping Windows PCs called Copilot+ PCs, which start at $1,000. The computers contain a new type of chip and other equipment that Microsoft says will keep your data private and secure. Computers can create images and rewrite documents, among other new AI-powered functions.
The company also introduced Recall, a new system that helps users quickly find documents and files they’ve worked on, emails they’ve read or websites they’ve browsed. Microsoft compares Recall to having a photographic memory built into your PC.
To use it, you can type casual phrases like “I’m thinking about a video call I had with Joe recently when he was holding an ‘I Love New York’ coffee mug.” The computer will then retrieve the recording of the video call containing those details.
To achieve this, Recall takes screenshots every five seconds of what the user is doing on the machine and compiles those images into a searchable database. The snapshots are stored and analyzed directly on the computer, so the data is not examined by Microsoft or used to improve its artificial intelligence, the company said.
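To make that concrete, here is a rough Python sketch of how a Recall-style pipeline could work: take a snapshot on a timer, extract its visible text, and index everything in a local database. The capture and text-extraction steps are placeholders, and the whole thing is an illustration of the idea (assuming SQLite’s FTS5 full-text extension is available), not Microsoft’s code.

```python
import sqlite3
import time

# Illustrative sketch of a Recall-style local index; not Microsoft's code.
# Requires SQLite built with the FTS5 full-text search extension.
db = sqlite3.connect("recall_sketch.db")
db.execute(
    "CREATE VIRTUAL TABLE IF NOT EXISTS snapshots USING fts5(taken_at, screen_text)"
)


def capture_screen() -> bytes:
    # Placeholder: a real system would grab the actual screen contents here.
    return b"fake-image-bytes"


def extract_text(image: bytes) -> str:
    # Placeholder: a real system would run on-device OCR over the image.
    return "video call with Joe holding an 'I Love New York' coffee mug"


def snapshot_loop(iterations: int = 3, interval_seconds: int = 5) -> None:
    # Recall reportedly captures a snapshot every few seconds.
    for _ in range(iterations):
        text = extract_text(capture_screen())
        db.execute(
            "INSERT INTO snapshots VALUES (?, ?)",
            (time.strftime("%Y-%m-%d %H:%M:%S"), text),
        )
        db.commit()
        time.sleep(interval_seconds)


snapshot_loop()

# Searching is then just a local full-text query; nothing leaves the machine.
for (taken_at,) in db.execute(
    "SELECT taken_at FROM snapshots WHERE snapshots MATCH ?", ('"coffee mug"',)
):
    print("Match found at", taken_at)
```

Keeping the index local is what lets Microsoft say it never sees the data.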
Security researchers, however, warned of potential risks: if the data were compromised, it could expose everything you have ever typed or viewed. In response, Microsoft, which had intended to release Recall last week, postponed the release indefinitely.
The computers are equipped with Microsoft’s new Windows 11 operating system. It has multiple layers of security, said David Weston, a company executive who oversees security.
Google AI
Google last month also announced a range of AI services.
One of the biggest revelations was a new AI-powered scam detector for phone calls. The tool listens to calls in real time, and if the caller sounds like a potential scammer (for instance, by asking for a bank PIN), the phone notifies you. Google said users would have to turn on the scam detector, which works entirely on the phone. That means Google will not listen to the calls.
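Google has not published how the detector works, but conceptually it amounts to scanning a live transcript for risky phrases without sending anything off the phone. Here is a deliberately simplified Python sketch of that idea; the patterns and the alert are illustrative assumptions, and the real system is presumably a machine-learning model rather than a keyword list.

```python
import re

# Simplified, hypothetical sketch of on-device scam detection.
SCAM_PATTERNS = [
    re.compile(r"\b(bank\s+)?pin\b", re.IGNORECASE),
    re.compile(r"\bwire\s+(the\s+)?money\b", re.IGNORECASE),
    re.compile(r"\bgift\s+cards?\b", re.IGNORECASE),
]


def looks_like_scam(utterance: str) -> bool:
    # Check each chunk of transcribed speech against the risky patterns.
    return any(pattern.search(utterance) for pattern in SCAM_PATTERNS)


# Simulated transcript chunks from a live call; nothing leaves the phone.
for chunk in ["Hello, this is your bank.", "Please read me your PIN now."]:
    if looks_like_scam(chunk):
        print("Alert: the caller may be asking for sensitive information.")
```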
Google announced another feature, Ask Photos, that does require information to be sent to the company’s servers. Users can ask questions like “When did my daughter learn to swim?” to surface the first images of their child swimming.
Google said its employees could, in rare cases, review Ask Photos conversations and photo data to address abuse or harm, and that the information might also be used to improve its Photos app. To put it another way, your question and the photo of your child swimming could be used to help other parents find images of their children swimming.
Google said its cloud was locked down with security technologies such as encryption and protocols to limit employee access to data.
“Our approach to protecting privacy applies to our AI functions, whether they’re powered on-device or in the cloud,” Suzanne Frey, Google’s executive who oversees trust and privacy, said in a statement.
But Mr. Green, the security researcher, said Google’s approach to AI privacy was relatively opaque.
“I don’t like the idea of my very personal photos and my very personal searches going to a cloud that’s not under my control,” he said.