In February, we covered a Eurasian Economic Commission (EEC) filing that indicated two iPad models were nearing release. Sure enough, about a month later Apple announced two iPads during a special event in Chicago.
On Wednesday, Consomac reported on a new filing showing that a total of 11 different iPhone model numbers have been approved by the EEC.
Details are, of course, scarce in the filing, beyond a mention that the devices will run iOS 11.
Considering the timing, Apple's typical fall release schedule for its flagship smartphones, and the constant swirl of rumors about an updated iPhone SE, we are likely looking at model numbers for the iPhone SE 2.
Taking into account the track record of EEC filings appearing roughly a month before a product is released, we should expect an announcement for these particular iPhone model numbers towards the end of May or early June.
It just so happens that Apple’s developer conference, WWDC, is scheduled for early June, where the company has used the opening keynote to announce new hardware in the past.
What is Apple planning?
You can check which of your apps are still 32-bit in About This Mac > System Report > Applications, where you'll find a column called 64-Bit. Click that column header, and you'll see which apps don't yet run that way.
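If you'd rather script that check, here's a minimal Python sketch that shells out to the built-in system_profiler tool and parses its XML output; note that the has64BitIntelCode key name is an assumption about the profiler's schema, so verify it on your own machine.

```python
#!/usr/bin/env python3
"""List installed Mac apps that are still 32-bit.

A minimal sketch: shells out to the built-in system_profiler tool and
parses its plist/XML output. The 'has64BitIntelCode' key name is an
assumption about the profiler's schema; verify it on your system.
"""
import plistlib
import subprocess

raw = subprocess.check_output(
    ["system_profiler", "-xml", "SPApplicationsDataType"]
)
data = plistlib.loads(raw)

for app in data[0]["_items"]:
    # Apps report "yes"/"no" for 64-bit Intel support (assumed key name).
    if app.get("has64BitIntelCode", "yes") == "no":
        print(app.get("_name"), "-", app.get("path", "unknown path"))
```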
Apple's DVD Player is one of these 32-bit apps, even though system information claims the software was last modified in the most recent macOS release, despite its version number being unchanged since 2015.
That modification evidently didn't extend to 64-bit support. And that's bad news, because it means an essential piece of software used by thousands of Mac users to watch video on their machines has no future.
Rip, mix, burn
Not so long ago, in 2001, Apple launched an iMac with the slogan "Rip, mix, burn." Those systems had two big claims to fame: iTunes and a CD-RW drive, along with built-in Internet access. A few years later, Apple introduced Apple TV, a "DVD player for the 21st century," as Apple's Steve Jobs termed it.
Things have changed since then.
I guess it’s easy to argue that with so much media content streamed or purchased online these days, there’s less need for an optical drive than there once was.
Mac users with extensive libraries of DVD classics and music-loving Mac fans with huge collections of CDs, some of which aren't available online, will likely disagree.
DVD sales still reached over a billion dollars in 2016. Sure, that's down 20 percent from their 2015 peak, but those numbers suggest a lot of people still use physical media.
Media consumers aren't the only people who may need access to a DVD burner. How many enterprise execs still zip around with presentation slides on a DVD? How many movie editors like to burn early edits to DVD for feedback and sharing? There's even a powerful case for using DVD as part of a Mac user's backup strategy, but that's going to be of little use if you can't access content on those DVDs.
What about the SuperDrive?
These days, the only way to get hold of a DVD reading/writing optical drive is to invest in a $79 Apple SuperDrive — and if you use a modern MacBook Pro equipped with Thunderbolt 3, then you need to get a USB-C to USB Adapter to connect the device to your Mac.
This lack of out-of-the-box compatibility at the high end of the Apple-verse is a clear message that the future of this Apple accessory doesn't look bright.
To be fair, Apple's isn't the only external DVD burning/playing product you can get, and a glance at the 87 one-star reviews it's accumulated at the Apple Store doesn't exactly fill one with confidence. However, even when purchasing a third-party DVD/CD drive, you must beware: many such devices simply aren't supported by macOS, though Macworld UK has a few suggestions here.
Where’s the love?
This fate isn’t entirely unexpected.
Apple’s been phasing out optical drives in its Macs since it introduced the MacBook Air in 2008. The last model to include one was the 2012 13-inch MacBook Pro Apple sold until October 2016.
DVD Player isn't the only Apple app that's not feeling 64-bit love from the company; QuickTime Player 7 and iDVD 7.1.2 are also stuck at 32-bit, and third-party disc-burning solutions seem similarly abandoned. This suggests Apple has no intention of even enabling system support for third-party CD/DVD authoring solutions.
If that is the case, then Mac users hoping to use or burn DVDs or CDs will have to find alternative software (such as IINA) and/or (potentially) new hardware solutions to do the job. It also seems probable we’ll see increased second-user prices for Macs equipped with built-in SuperDrives.
What can we do instead?
Apple introduced a feature called Remote Disc when it launched the optical-drive-free MacBook Air in 2008. This lets you use an older Mac that has an optical drive as a remote disc server, which you can use to access data discs (though not DVD movies or audio CDs). You can also choose to make virtual copies of DVDs you need to keep around using another Mac that has an optical drive, if you have access to one.
However, with literally millions of CDs and DVDs still sold every year, it would probably be useful for many Mac users to learn exactly what kind of future they should plan for when it comes to playing and writing the CDs and DVDs they may already own.
Unlike today's most popular wireless chargers, which require devices to rest on a charging pad, Ossia is among several companies developing trickle-charging capability at distances of many feet. Some of these technologies, also known as "uncoupled wireless charging," can even charge through walls, or simply top off a device as you enter a room.
Ossia's Cota technology uses radio frequency (RF) to send power and data over distances greater than 15 feet. Cota transmitters can link up to charge dozens of mobile devices within a radius of several meters, and the transmitters come in multiple form factors, including a drop-ceiling tile.
By linking two Cota ceiling tiles, the system can power mobile devices in a typical coffee shop, office or other crowded spaces, the company claims.
The company has also developed AA batteries that can receive a wireless charge.
In 2013, after six years in development, Ossia unveiled the first of its Cota wireless charging prototypes, a small dongle for smartphone charging; at the time, the company said its technology would be available to consumers and enterprises by 2015. That didn’t happen.
Now, through a new partnership with Molex, Ossia expects to have a product on the market by the second half of this year, according to Jennifer Grenz, the company's vice president of marketing. Molex, an 80-year-old manufacturer based in Illinois, helped develop the first car radio, the first cell phone and the first HDTV.
Through tiny antennas, Cota-enabled wireless devices can be activated, managed, and monitored via the Cota Cloud networking platform. Cota power receivers are small enough to be embedded into IoT devices, wearables, and many other electronics.
Ossia has miniaturized its chipset and placed it into AA batteries, allowing any device using them to be wirelessly charged over distances greater than 15 feet.
Lily Yeung, vice president of Molex Ventures, said her company first engaged with Ossia two years ago with a round of funding.
Last week, the companies announced a collaboration on next-generation Cota wireless charging antenna designs. The project will include several new patch antenna designs that will service a variety of commercial applications.
Ossia's first products will likely roll out across a number of different markets; early adopters will probably be in the Internet of Things (IoT) arena, where modules and sensors could not only be charged but also transmit their data via the Cota technology.
One of the hurdles for untethered wireless charging technology, Yeung said, has been achieving Federal Communications Commission (FCC) approval.
An Ossia competitor, Energous, finally received FCC approval in December when the FCC certified the company’s “power-at-a-distance” transmitter.
“That’s a good indication regulatory bodies are embracing the technology,” Yeung said.
Michael Leabman, founder and CTO of Energous, demonstrates how one of the company’s wireless charging routers can send power over distances.
The company claims its Energous WattUp Mid Field transmitter can trickle charge smartphones, watches, speakers and other devices up to 15 feet away.
Like Ossia, Energous has partnered with a large electronics manufacturer, Dialog Semiconductor, to help it distribute its technology.
Yeung sees retail and manufacturing as the most promising markets for Ossia’s uncoupled charging technology.
For example, factories use IoT devices to transmit all kinds of data between machinery for production analysis and maintenance. Smart buildings contain hundreds or thousands of sensors for HVAC and other systems, which could be kept charged and connected. Grocery stores use electronic price tags above shelves that could be charged by a wireless router and adjust the price of produce or other perishable items as they age or as sales occur.
“Millions of dollars are lost every year through items being thrown away when they’re kept on a shelf too long,” Yeung said.
Manufacturing is expected to spend $189 billion on IoT in 2018, according to IDC. By 2021, global IoT spending is expected to total nearly $1.4 trillion as organizations continue to invest in the hardware, software, services, and connectivity that enable the IoT.
Uncoupled wireless charging technology typically sends low levels of energy (less than 1 watt) across a room. Ossia, Energous and uBeam all demonstrated uncoupled charging technology at CES earlier this year.
How wireless charging works
Last September, Apple announced its AirPower wireless charger would be available sometime in 2018; the company has remained mum since then. A month after announcing the AirPower charger, Apple purchased New Zealand-based PowerByProxi, whose wireless charging technology can send power to multiple devices, from headphones to remote controls, at the same time. PowerByProxi makes a number of charging device form factors, including a box into which multiple items can be dropped to charge.
As appealing as that may sound for mobile devices, Apple likely hopes to use the technology for a vast array of electronics such as the Apple TV remote control or its own computer mouse – and perhaps even for industrial applications.
Apple has settled on the popular Qi wireless charging specification for its mobile devices, from the iPhone 8 and iPhone X smartphones to its Apple Watch and its wirelessly chargeable AirPods earbuds. So what it may do with PowerByProxi's intellectual property remains to be seen.
PowerByProxi is a component company, so the wireless chargers it creates for demonstrations are proofs of concept. The company has partnerships with firms such as Samsung, Texas Instruments (TI) and Linear, which build hardware based on its working prototypes.
PowerByProxi’s latest prototype of wireless charging is the Proxi-Com, which can transmit both power and data, just as Ossia claims its technology can do. Initially, PowerByProxi’s wireless device supports three common protocols used in industrial applications: a CAN bus for automobiles, Ethernet and Digital GPIO (general purpose input/output circuit).
Ossia, through a partnership with Stuttgart, Germany-based Motherson Innovations, is working on bringing the Cota wireless power system into the interiors of private and public vehicles by 2021, not only to deliver continuous wireless power to occupants' personal devices but also to various vehicle sensors.
The iPhone X accounted for 35 percent of worldwide smartphone profits in the last quarter of 2017, despite reports of slower than usual sales and even though it was only available for two months.
The estimate comes from a new report by Counterpoint Research, which says the iPhone X generated five times more profit than the combined profits of over 600 Android OEMs during the quarter.
Apple overall fared well compared with the handset industry, which saw worldwide profits shrink one percent year over year; Apple's profits grew one percent year over year.
Taking into consideration all iPhone models, Apple raked in 86 percent of global handset profits, with the iPhone 8 and iPhone 8 Plus accounting for 19.1 percent and 15.2 percent, respectively. Meanwhile, last year’s iPhone 7 and iPhone 7 Plus together accounted for 11.2 percent of profits.
This continues the pattern of Apple dominating handset industry profits while Android OEMs fight over what remains, sometimes as little as 10 percent of the total.
Of the top 10 phones by profitability in the quarter, the only non-iPhone devices in the list are Samsung’s Galaxy Note 8 and Galaxy S8 Plus, which had a share of 3.9 percent and 1.7 percent, respectively.
Apple hasn’t revealed how many of the $1,000 iPhone X units it sold in Q4 2017, but said in its Q1 report that it has been the top-selling iPhone since it began shipping in November. Apple reported $61.5bn in iPhone revenues for the quarter, up 13 percent year over year. Analyst firm Canalys estimated it shipped 29 million iPhone X units in the quarter.
Despite several financial analysts cutting their Q1 2018 iPhone X sales forecasts, in some cases to as few as 14 million units, Counterpoint Research analyst Karn Chauhan reckons there's still room for the iPhone X's share of profits to grow.
“The iPhone X alone generated 21 percent of total industry revenue and 35 percent of total industry profits during the quarter and its share is likely to grow as it advances further into its life cycle,” said Chauhan.
“Additionally, the longer shelf life of all iPhones ensured that Apple still has eight out of the top ten smartphones, including its three-year-old models, generating the most profits compared with current competing smartphones from other OEMs.”
Apple’s success means even iPhones that are two generations behind the current models are still more profitable than new handsets from Chinese OEMs, which collectively generated revenues of $1.3bn in the quarter. Huawei was the top performer in China, according to Counterpoint Research.
Moving terabytes or even petabytes of data to the cloud is a daunting task. But it is important to look beyond the number of bytes. You probably know that your applications are going to behave differently when accessed in the cloud, that cost structures will be different (hopefully better), and that it will take time to move all that data.
Because my company, Data Expedition, is in the business of high-performance data transfer, customers come to us when they expect network speed to be a problem. But in the process of helping companies overcome that problem, we have seen many other factors that threaten to derail cloud migrations if overlooked.
Collecting, organizing, formatting, and validating your data can present much bigger challenges than moving it. Here are some common factors to consider in the planning stages of a cloud migration, so you can avoid time-consuming and expensive problems later.
Cloud migration bottleneck #1: Data storage
The most common mistake we see in cloud migrations is pushing data into cloud storage without considering how that data will be used. The typical thought process is, “I want to put my documents and databases in the cloud and object storage is cheap, so I’ll put my document and database files there.” But files, objects, and databases behave very differently. Putting your bytes into the wrong one can cripple your cloud plans.
Files are organized by a hierarchy of paths, a directory tree. Each file can be quickly accessed, with minimal latency (time to first byte) and high speed (bits per second once the data begins flowing). Individual files can be easily moved, renamed, and changed down to the byte level. You can have many small files, a small number of large files, or any mix of sizes and data types. Traditional applications can access files in the cloud just like they would on premises, without any special cloud awareness.
All of these advantages make file-based storage the most expensive option, but storing files in the cloud has a few other disadvantages. To achieve high performance, most cloud-based file systems (like Amazon EBS) can be accessed by only one cloud-based virtual machine at a time, which means all applications needing that data must run on a single cloud VM. To serve multiple VMs (like Azure Files) requires fronting the storage with a NAS (network attached storage) protocol like SMB, which can severely limit performance. File systems are fast, flexible, and legacy compatible, but they are expensive, useful only to applications running in the cloud, and do not scale well.
Objects are not files. Remember that, because it is easy to forget. Objects live in a flat namespace, like one giant directory. Latency is high, sometimes hundreds or thousands of milliseconds, and throughput is low, often topping out around 150 megabits per second unless clever tricks are used. Much about accessing objects comes down to clever tricks like multipart upload, byte range access, and key name optimization. Objects can be read by many cloud-native and web-based applications at once, from both within and outside the cloud, but traditional applications require performance-crippling workarounds. Most interfaces for accessing object storage make objects look like files: key names are filtered by prefix to look like folders, custom metadata is attached to objects to appear like file metadata, and some systems like FUSE cache objects on a VM file system to allow access by traditional applications. But such workarounds are brittle and sap performance. Object storage is cheap, scalable, and cloud-native, but it is also slow and difficult to access.
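To make one of those "clever tricks" concrete, here's a minimal sketch using Amazon's boto3 SDK, whose managed transfer layer performs multipart, parallel uploads for you; the bucket and file names are placeholders, not real resources.

```python
"""Multipart, parallel upload to object storage: a minimal sketch.

boto3's managed transfer layer splits large files into parts and
uploads them concurrently -- one way to get decent object-storage
throughput. Bucket and file names below are placeholders.
"""
import boto3
from boto3.s3.transfer import TransferConfig

s3 = boto3.client("s3")

config = TransferConfig(
    multipart_threshold=64 * 1024 * 1024,  # switch to multipart above 64 MB
    multipart_chunksize=16 * 1024 * 1024,  # 16 MB parts
    max_concurrency=8,                     # upload parts in parallel
)

s3.upload_file("backup.tar", "example-bucket", "backups/backup.tar", Config=config)
```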
Databases have their own complex structure, and they are accessed by query languages such as SQL. Traditional databases may be backed by file storage, but they require a live database process to serve queries. This can be lifted into the cloud by copying the database files and applications onto a VM, or by migrating the data into a cloud-hosted database service. But copying a database file into object storage is only useful as an offline backup. Databases scale well as part of a cloud-hosted service, but it is critical to ensure that the applications and processes that depend on the database are fully compatible and cloud-native. Database storage is highly specialized and application-specific.
Balancing the apparent cost savings of object storage against the functionality of files and databases requires careful consideration of exactly what functionality is required. For example, if you want to store and distribute many thousands of small files, archive them into a ZIP file and store that as a single object instead of trying to store each individual file as a separate object. Incorrect storage choices can lead to complex dependencies that are difficult and expensive to change later.
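A minimal sketch of that packing step, assuming a local folder of small documents (the paths are placeholders):

```python
"""Pack thousands of small files into one ZIP before uploading.

Storing a single archive as one object avoids paying per-object
latency on every small file. Paths here are placeholders.
"""
import os
import zipfile

SOURCE_DIR = "documents"   # hypothetical folder of small files
ARCHIVE = "documents.zip"

with zipfile.ZipFile(ARCHIVE, "w", compression=zipfile.ZIP_DEFLATED) as zf:
    for root, _dirs, files in os.walk(SOURCE_DIR):
        for name in files:
            path = os.path.join(root, name)
            # Preserve the relative path inside the archive.
            zf.write(path, arcname=os.path.relpath(path, SOURCE_DIR))
```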
Cloud migration bottleneck #2: Data preparation
Moving data to the cloud is not as simple as copying bytes into the designated storage type. A lot of preparation needs to happen before anything is copied, and that time requires careful budgeting. Proof-of-concept projects often ignore this step, which can lead to costly overruns later.
Filtering out unnecessary data can save a lot of time and storage costs. For example, a data set may contain backups, earlier versions, or scratch files that do not need to be part of the cloud workflow. Perhaps the most important part of filtering is prioritizing which data needs to be moved first. Data that is being actively used will not tolerate being out of sync by the weeks, months, or years it takes to complete the entire migration process. The key here is to come up with an automated means of selecting which data is to be sent and when, then keep careful records of everything that is and is not done.
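As a rough illustration of automated selection plus record-keeping, here's a hedged Python sketch; the skip-list of extensions and the 30-day "active data" cutoff are illustrative assumptions, not recommendations.

```python
"""Select which files to migrate, and in what order: a minimal sketch.

Skips scratch/backup files, sends recently modified data first, and
records every decision in a manifest. The extension list and cutoff
are illustrative assumptions.
"""
import csv
import os
import time

SKIP_EXTENSIONS = {".tmp", ".bak", ".old"}    # assumed "unnecessary" data
RECENT_CUTOFF = time.time() - 30 * 24 * 3600  # "active" = touched in 30 days

def plan_migration(source_dir, manifest_path):
    rows = []
    for root, _dirs, files in os.walk(source_dir):
        for name in files:
            path = os.path.join(root, name)
            if os.path.splitext(name)[1].lower() in SKIP_EXTENSIONS:
                rows.append((path, "skip"))
                continue
            # Active data goes first so it spends less time out of sync.
            priority = 1 if os.path.getmtime(path) >= RECENT_CUTOFF else 2
            rows.append((path, f"priority-{priority}"))
    # Keep careful records of everything that is and is not sent.
    with open(manifest_path, "w", newline="") as f:
        csv.writer(f).writerows([("path", "decision"), *rows])
    return rows
```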
Different cloud workflows may require the data to be in a different format or organization than on-premises applications. For example, a legal workflow might require translating thousands of small Word or PDF documents and packing them in ZIP files, a media workflow might involve transcoding and metadata packing, and a bioinformatics workflow might require picking and staging terabytes of genomics data. Such reformatting can be an intensely manual and time-consuming process. It may require a lot of experimentation, a lot of temporary storage, and a lot of exception handling. Sometimes it is tempting to defer any reformatting to the cloud environment, but remember that this does not solve the problem, it just shifts it to an environment where every resource you use has a price.
Part of the storage and formatting questions may involve decisions about compression and archiving. For example, it makes sense to ZIP millions of small text files before sending them to the cloud, but not a handful of multi-gigabyte media files. Archiving and compressing data makes it easier to transfer and store the data, but consider the time and storage space it takes to pack and unpack those archives at either end.
Cloud migration bottleneck #3: Information validation
Integrity checking is the single most important step, and also the easiest to get wrong. Often it is assumed that corruption will occur during the data transport, whether that is by physical media or network transfer, and can be caught by performing checksums before and after. Checksums are a vital part of the process, but it is actually the preparation and importing of the data where you are most likely to suffer loss or corruption.
When data is shifting formats and applications, meaning and functionality can be lost even when the bytes are the same. A simple incompatibility between software versions can render petabytes of "correct" data useless. Coming up with a scalable process to verify that your data is both correct and usable can be a daunting task. At worst, it may devolve into a labor-intensive and imprecise manual process of "it looks okay to me." But even that is better than no validation at all. The most important thing is to ensure that you will be able to recognize problems before the legacy systems are decommissioned!
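Here's a minimal sketch of the before-and-after checksum step; it catches transport corruption, though, as noted above, it cannot catch semantic breakage introduced by format changes.

```python
"""Checksum files before and after transfer: a minimal sketch.

SHA-256 is computed in chunks so multi-gigabyte files need not fit in
memory. Run this on-premises before sending and again in the cloud
after arrival, then diff the two manifests.
"""
import hashlib

def sha256_of(path, chunk_size=8 * 1024 * 1024):
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

print(sha256_of("backup.tar"))  # placeholder path
```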
Cloud migration bottleneck #4: Transfer marshaling
When lifting a single system to the cloud, it is relatively easy to just copy the prepared data onto physical media or push it across the Internet. But this process can be difficult to scale, especially for physical media. What seems “simple” in a proof-of-concept can balloon to “nightmare” when many and varied systems come into play.
A media device, such as an AWS Snowball, must be connected to each machine. That could mean physically walking the device around one or more data centers, juggling connectors, updating drivers, and installing software. Connecting over the local network saves the physical movement, but software setup can still be challenging and copy speed may drop to well below what could be achieved with a direct Internet upload. Transferring the data directly from each machine over the Internet saves many steps, especially if the data is cloud-ready.
If data preparation involves copying, exporting, reformatting, or archiving, local storage can become a bottleneck. It may be necessary to set up dedicated storage to stage the prepared data. This has the advantage of allowing many systems to perform preparation in parallel, and reduces the contact points for shippable media and data transfer software to just one system.
Cloud migration bottleneck #5: Data transfer
When comparing network transfer to media shipment, it is easy to focus on just the shipping time. For example, an 80 terabyte AWS Snowball device might be sent by next-day courier, achieving an apparent data rate of more than eight gigabits per second. But this ignores the time it takes to acquire the device, configure and load it, prepare it for return, and allow the cloud vendor to copy the data off on the back-end. Customers of ours who do this regularly report that four-week turnaround times (from device ordering to data available in the cloud) are common. That brings the actual data transfer rate of shipping the device down to just 300 megabits per second, much less if the device is not completely filled.
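A quick back-of-the-envelope check of those figures, using the device capacity and turnaround cited above:

```python
"""Effective transfer rate of a shipped device: a worked check of the
figures above (80 TB capacity, four-week end-to-end turnaround)."""

capacity_bits = 80e12 * 8          # 80 terabytes in bits
turnaround_s = 4 * 7 * 24 * 3600   # four weeks in seconds

print(capacity_bits / turnaround_s / 1e6)  # ~265 Mbps, in the ballpark of
                                           # the ~300 Mbps cited above
```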
Network transfer speeds likewise depend on a number of factors, foremost being the local uplink. You can’t send data faster than the physical bit rate, though careful data preparation can reduce the amount of data you need to send. Legacy protocols, including those that cloud vendors use by default for object storage, have difficulty with speed and reliability across long-distance Internet paths, which can make achieving that bit rate difficult. I could write many articles about the challenges involved here, but this is one you do not have to solve yourself. Data Expedition is one of a few companies that specialize in ensuring that the path is fully utilized regardless of how far away your data is from its cloud destination. For example, one gigabit Internet connection with acceleration software like CloudDat yields 900 megabits per second, three times the net throughput of an AWS Snowball.
The biggest difference between physical shipment and network transfer is also one of the most commonly overlooked during proof-of-concept. With physical shipment, the first byte you load onto the device must wait until the last byte is loaded before you can ship. This means that if it takes weeks to load the device, then some of your data will be weeks out of date by the time it arrives in the cloud. Even when data sets reach the petabyte levels where physical shipment may be faster over all, the ability to keep priority data current during the migration process may still favor network transfer for key assets. Careful planning during the filtering and prioritization phase of data preparation is essential, and may allow for a hybrid approach.
Getting the data into a cloud provider may not be the end of the data transfer step. If it needs to be replicated to multiple regions or providers, plan carefully how it will get there. Upload over the Internet is free, while AWS, for example, charges up to two cents per gigabyte for interregional data transfer and nine cents per gigabyte for transfer to other cloud vendors. Both methods will face bandwidth limitations that could benefit from transport acceleration such as CloudDat.
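For planning purposes, that egress pricing is easy to turn into a rough budget; the 100 TB figure below is just a placeholder:

```python
"""Illustrative egress math for replication planning, using the
per-gigabyte rates cited above; the data size is a placeholder."""

data_gb = 100 * 1000           # say, 100 TB to replicate
print(data_gb * 0.02)          # inter-region copy: ~$2,000
print(data_gb * 0.09)          # copy to another cloud vendor: ~$9,000
```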
Cloud migration bottleneck #6: Cloud scaling
Once data arrives at its destination in the cloud, the migration process is only half finished. Checksums come first: Make sure that the bytes that arrived match those that were sent. This can be trickier than you may realize. File storage uses layers of caches that can hide corruption of data that was just uploaded. Such corruption is rare, but until you’ve cleared all of the caches and re-read the files, you can’t be sure of any checksums. Rebooting the instance or unmounting the storage does a tolerable job of clearing caches.
Validating object storage checksums requires that each object be read out into an instance for calculation. Contrary to popular belief, object “E-tags” are not useful as checksums. Objects uploaded using multipart techniques in particular can only be validated by reading them back out.
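A minimal read-back validation sketch using boto3, under the assumption that you recorded a SHA-256 digest before upload; the bucket, key, and expected digest are placeholders.

```python
"""Validate an uploaded object by reading it back: a minimal sketch.

E-tags are not reliable checksums (especially for multipart uploads),
so the object is streamed back out and hashed. Bucket, key, and the
expected digest are placeholders.
"""
import hashlib
import boto3

s3 = boto3.client("s3")

def object_sha256(bucket, key, chunk_size=8 * 1024 * 1024):
    digest = hashlib.sha256()
    body = s3.get_object(Bucket=bucket, Key=key)["Body"]
    for chunk in iter(lambda: body.read(chunk_size), b""):
        digest.update(chunk)
    return digest.hexdigest()

expected = "0000...placeholder..."  # digest recorded before upload
actual = object_sha256("example-bucket", "backups/backup.tar")
print("match" if actual == expected else "MISMATCH")
```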
Android security sure can seem like a scary subject.
And it’s no wonder: Every few weeks, we see some new hair-raising headline about how our phones are almost certain to be possessed by demons that’ll steal our data, eat our ice cream, and pinch our tenders when we least expect it.
This week, it’s a series of Android malware monsters known as “ViperRat” and “Desert Scorpion” that has phone-holders everywhere trembling in their bootsies. (Kudos to whoever came up with those spooky-sounding names, by the way. It’s an art!) Last week, it was word that Android device-makers might be skipping security updates that had our hands a-shakin’.
These sorts of stories can certainly be disconcerting (especially that second one, which is less about the typical malware, directly, and more about a potential act of deception — “potential” being the key word for now, though). But you know what? From a regular user’s perspective, these electrifying tales are almost never cause for alarm.
Before the inevitable next Android security scare comes along, take a moment to refresh yourself on six security facts that’ll help you breathe a little easier and leave the hyperventilating for something that actually deserves it.
1. Android malware can’t magically install itself on your phone
When we talk about “malware,” most people envision a plague-like force that finds its way onto your phone and then sneakily undermines you. But guess what? Even in a worst-case scenario on Android, that just isn’t how things work.
In order for something to “take over” your Android device — or do much of anything, really — you’d first have to manually install it and then grant it access to any relevant permissions. Most of the talk about malware on Android relies on the assumption that the user has done both of those things, be it intentionally or via manipulation. But that’s a pretty big assumption to make.
2. Even if it is somehow installed, Android malware is highly unlikely to be able to access any sensitive data
Android works with a system of sandboxing that keeps each app separate from other areas of the device and limits the ways in which it can go beyond those barriers. On enterprise devices, an additional fence is in place to keep personal and company data isolated.
According to Android's recently departed security director (whom I interviewed for a story late last year), the vast majority of active Android malware revolves around attempts to make money by abusing advertising, engaging in botnet-like behavior, committing click fraud, or conducting SMS spoofing. Google's latest Android Security Year in Review report, which came out just last month, presents a similar conclusion based on all of Google's internal data from the past year.
To put it in simpler terms, Android malware is mostly the terrain of low-level pickpockets who pounce on easy opportunities to snag dangling dollars — usually indirectly, at that — and not sophisticated identity thieves who infiltrate their victims’ lives.
3. Android security has multiple layers
Hearing that your phone might not have the most recent Android security patch is upsetting — and it should be. Android’s monthly security patches absolutely do matter. But they’re also a single part of a much bigger Android security picture, one in which no single layer by itself is typically a make-or-break element.
Much of Android’s security is at its core, with factors like the aforementioned sandboxing along with the platform’s permissions system, encryption system, and Verified Boot system. These are the types of areas we see improve with OS updates each year (like with Oreo in 2017 and Android P now — a perfect example, as I’ve said before, of why OS updates unequivocally matter). Even by themselves, they make most types of truly damaging “infections” incredibly difficult to achieve.
Then there's Google Play Protect, which continuously scans the Play Store and your actual device for signs of suspicious behavior (and remains active and up to date independently, without the need for any manufacturer- or carrier-provided rollouts). And yes, that system does occasionally fail, but (a) that happens far less frequently than Android security headlines would lead you to believe — more on that in a moment — and (b) such constant challenging and adaptation is an inevitable part of any security system.
Beyond that, Chrome on Android keeps an eye out for any website-based threats, and Android itself monitors for signs of SMS-based scams and warns you if any such signals are detected.
All combined, that brings us to our next point:
4. Your odds of actually encountering Android malware in the real world are almost laughably low
I’ve often said that Android malware tends to be more theoretical than practical, and it’s true: Most Android security scare stories fail to take into account all the layers of protection mentioned above and the fact that few, if any, regular people are actually in danger by whatever new threat happens to have come along.
First, for perspective: Based on Google’s 2017 data, the probability of downloading a “potentially harmful app” from the Play Store is about 0.02%. Less than a tenth of a percent of active Android devices worldwide encountered such a scenario last year. Even for the minority of folks who download apps from sources outside of the Play Store, we’re looking at 0.82% of all devices, globally, being affected by any “potentially harmful app” over the last year.
And don’t forget, too, what we’re actually talking about when we discuss these types of apps — things like the “Gaiaphish” family of malware, which made up the majority of the titles in the most-observed category of “potentially harmful apps” from Google’s 2017 report. What does the “Gaiaphish” family do, you might wonder? It “uses Google account authentication tokens on Android devices to fraudulently manipulate parts of Google Play, such as star ratings or comments.”
5. Spreading fear over Android malware is serious business
Whenever you see a story about some scary-sounding new Android security threat, take a moment to cross-reference the name of the company behind the research. With rare exception, you’ll find it’s a company that makes its money by selling — yup, you guessed it — security software for Android.
That’s not to say you shouldn’t believe anything the company says because of that, but you absolutely should consider the firm’s motivation as part of the context. All of these companies work tirelessly to market security scares on Android because, quite simply, keeping people convinced that Android is scary is what keeps them in business.
That’s also why their marketing campaigns (and that’s ultimately what they are) consistently overplay the risks involved with a threat while downplaying the layers of protection already in place to combat it — layers that, in most scenarios, make the threats of little real-world concern for the vast majority of Android users.
6. Your own common sense goes a long way in keeping you protected
All else aside, basic security hygiene is worth a heck of a lot when it comes to Android security.
Look at something before you download it, especially if it’s something you haven’t heard of anywhere else and that isn’t from an obviously reputable source. Look at the reviews. Look at the permissions the app asks for and think about whether they make sense — and whether you’re comfortable providing them. Click the name of the developer and see what else they’ve created.
Unless you really know what you’re doing, don’t download apps from random websites or other unestablished third-party sources. Don’t accept requests for permissions without understanding what they’re asking. And if you ever see a prompt asking you to install something you don’t recognize, don’t authorize it.
I’ve said it before, and I’ll say it again: With all due respect to the dodos of the world, it doesn’t take a rocket scientist to stick with reputable-looking apps and avoid questionable creations.
Bonus: 4 questions to ask every time you see an Android security story
I’ll end with a short series of questions I came up with a while back to help evaluate any Android security story. The questions are incredibly effective and will save you countless hours of undue stress.
- Who’s behind the “research” driving this story, and what is their motivation?
- Is this threat related to something I’m likely to download and install, or does it revolve around some weird random app no normal person would ever encounter?
- On the off-chance that I did somehow install the trigger, would my phone automatically protect me from anything harmful?
- Has any normal user actually been affected by this in the real world?
Think through those questions carefully — and make sure you’re always keeping up with your own Android security hygiene — and you’ll find there’s rarely a reason to exert much energy, no matter how much huffing and puffing the latest Android malware monster may attempt.