Hardware virtualization was a great step forward in application hosting compared to the days of bare metal. Hypervisors allowed us to isolate multiple applications within one hardware platform, freeing us to use hardware resources more efficiently by hosting heterogeneous workloads on the same infrastructure. Still, virtual machines have massive overhead in terms of resource consumption, because each VM runs a fully dedicated operating system.
Containerization advances the benefits of virtualization much further by allowing containers to share the OS kernel, networking stack, file system, and other system resources of the host machine, all while incurring far less memory and CPU overhead.
If your organization is wary about making the transition from VMs to containers, consider the following advantages of containers:
- Far more efficient resource utilization than with VMs.
- Easier scaling—resizing container limits can be achieved on the fly, without a reboot.
- Faster provisioning and start times for containers compared to VMs.
- More granular resource provisioning, and the ability to share resources among containers on the same host.
- Publicly available container templates based on the Docker packaging standard make it easy to create new images for specific projects.
Jelastic is a PaaS provider that offers managed containerized application environments for a wide variety of programming languages including Java, PHP, Ruby, Node.js, Python, and .Net. The platform was initially built on Virtuozzo containers, so when Docker’s container technology arrived, we already had strong expertise in containerization and quickly added support for the Docker standard. Because we support multi-cloud options (allowing the Jelastic PaaS to be used on various IaaS platforms), our clients can maximize container portability benefits by mixing and matching different cloud services and managing them through the same UI and API.
The Jelastic PaaS helps developers to migrate from hypervisors to containers more quickly and cost-effectively without application code changes. To show how we enable this transition, I will describe the structure and scaling process of a VM-based application running in GlassFish (Java EE application server), and how this monolith can be decomposed to containers using our technology.
Starting point: GlassFish Server in a VM
In a typical GlassFish-based environment, the domain administration server (DAS) is the central management point from which all resources and applications in the cluster are configured and propagated to each worker instance. Those workers host your web applications, web services, and other resources.
If you need to add more resources to the same VM, you’ll have to move to a bigger machine, which in most cases means downtime during the migration. This makes VMs inefficient in terms of vertical scaling. Furthermore, to scale a GlassFish cluster horizontally across VMs, you’ll have to
- Provision a new VM with a preconfigured GlassFish template.
- Configure SSH access and add the VM as an SSH node in the DAS.
- Create a new remote worker instance on that node via the DAS UI or admin CLI.
This process must be repeated every time you reach the scaling limit and need more resources. These steps take significant time and invite human error that can jeopardize the whole project.
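For a sense of what automating those steps involves, here is a minimal sketch in Python that shells out to GlassFish’s asadmin tool, assuming the new VM has already been provisioned from a GlassFish template and that SSH access to it is in place. The host, node, cluster, and instance names are hypothetical, and this is not Jelastic’s implementation, just an illustration of the manual work described above.

```python
import subprocess

# Hypothetical names for illustration only.
DAS_ADMIN_PORT = "4848"                     # default GlassFish admin port
NEW_VM_HOST = "worker-vm-2.example.com"     # VM provisioned from a GlassFish template
NODE_NAME = "ssh-node-2"
CLUSTER_NAME = "app-cluster"
INSTANCE_NAME = "worker-instance-2"

def asadmin(*args):
    """Run a GlassFish asadmin subcommand against the DAS, failing loudly on error."""
    subprocess.run(["asadmin", "--port", DAS_ADMIN_PORT, *args], check=True)

# Register the new VM as an SSH node on the DAS.
asadmin("create-node-ssh", "--nodehost", NEW_VM_HOST, NODE_NAME)

# Create a remote worker instance on that node, join it to the cluster, and start it.
asadmin("create-instance", "--node", NODE_NAME, "--cluster", CLUSTER_NAME, INSTANCE_NAME)
asadmin("start-instance", INSTANCE_NAME)
```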
Making the move: Virtual machines to containers
To migrate from VMs, the monolithic application topology first needs to be decomposed into small logical pieces distributed among a set of interconnected containers. Each application component should be placed inside an isolated container. This approach can simplify the application topology overall, as some parts of the project may turn out to be unnecessary in the new architecture.
There are two types of containers to choose from for running a project: application containers and system containers. An application container (such as Docker) typically runs little more than a single process. It is more appropriate for new projects, as it is relatively easy to create the required images from publicly available Docker templates and to meet the requirements of the cloud-native approach from the very beginning.
A system container (LXD, Virtuozzo) behaves more like a full OS. It can run a full-featured init system such as systemd, SysVinit, or OpenRC, which spawns other processes like OpenSSH, Crond, or Syslogd together inside a single container. This is preferable for monolithic and legacy applications, as it lets you reuse the architecture and configurations implemented in the original VM-based design.
System containers offer multiple benefits when migrating an existing legacy application: IP addresses, hostnames, and locally stored data can survive container downtime; there’s no need for port mapping; and you gain far better isolation and virtualization of resources. Plus you get compatibility with SSH-based configuration tools and even hibernation and live migration of the memory state. The only perceptible disadvantage compared to application containers may be slower start-up time, as system containers are a bit heavier due to the additional services required for running multiple processes.
In Jelastic, it is possible to run both application and system containers. In contrast to PaaS vendors that require the so-called Twelve-Factor App methodology, Jelastic does not force customers into any specific approach or application design; it can host both cloud-native microservices and legacy monoliths. Nor must developers adapt their code to a proprietary API in order to deploy applications to the cloud. Projects can be up and running in minutes from an application archive, for example, or just a link to the project on GitHub. This lowers the barrier to entry, reduces go-to-market time, and eliminates vendor lock-in.
Destination: GlassFish Server in a container
The components of a GlassFish cluster are the same in a container as in a VM, but they are distributed across separate isolated containers.
Worker nodes can be added or removed automatically, as well as attached to a DAS node using a container orchestration platform and a set of natively integrated automation scripts.
Jelastic hides much of the complexity of migration and automates scaling and clustering. Once the application is running in a containerized environment, the PaaS handles all the steps of the application lifecycle, providing easy creation of dev, test, and production environments and vertical and horizontal scaling out of the box. For example, there is already a pre-configured, highly available GlassFish cluster with automatic provisioning of replicated instances, so developers do not need to write custom scripts to implement the required logic and can concentrate on coding.
Migrating from VMs to containers might seem daunting, but it’s actually pretty straightforward—and the long-term benefits are significant. By moving to containers you will reduce system resource consumption, simplify horizontal scaling, enable automatic vertical scaling without a restart, and increase infrastructure utilization by hosting different applications within a single physical server, all without compromising security. The final benefit that must be highlighted is the inherent portability of containers. The ability to move your containers across multiple cloud vendors means your organization can avoid vendor lock-in and take advantage of ever-declining cloud service costs.
The Jelastic PaaS can not only ease the migration process but also simplify the scaling and management of your containerized workloads throughout the application lifecycle. With the right automation and orchestration, developers and IT ops teams can realize the benefits of containers without deep knowledge of the technology, focusing their attention instead on their core work.
Ruslan Synytsky is CEO and co-founder of Jelastic, provider of a cloud platform that simplifies application hosting inside containers. An expert in large-scale distributed applications and enterprise platforms, Ruslan has led engineering and software architecture teams at Jelastic, Inntelligenz, and SolovatSoft. He was one of the key engineering leads at the National Space Agency of Ukraine.
New Tech Forum provides a venue to explore and discuss emerging enterprise technology in unprecedented depth and breadth. The selection is subjective, based on our pick of the technologies we believe to be important and of greatest interest to InfoWorld readers. InfoWorld does not accept marketing collateral for publication and reserves the right to edit all contributed content. Send all inquiries to email@example.com.
Containers provide a lightweight way to make application workloads portable, like a virtual machine but without the overhead and bulk typically associated with VMs. With containers, apps and services can be packaged up and moved freely between physical, virtual, or cloud environments.
Docker, a container creation and management system created by Docker, Inc., takes the native container functionality found in Linux and makes it available to end-users through a command-line interface and a set of APIs.
Many common application components are now available as prepackaged Docker containers, making it easy to deploy stacks of software as decoupled components (the “microservices” model). That said, it helps to know how the pieces fit together from the inside out.
Thus, in this guide, we’ll install the Apache web server in a Docker container and investigate how Docker operates along the way.
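As a preview of where we’re headed, here is a minimal sketch using the Docker SDK for Python (the docker package) to pull the official httpd image and run it as a container; the guide itself proceeds step by step. The container name and host port below are arbitrary choices for this example.

```python
import docker  # Docker SDK for Python: pip install docker

client = docker.from_env()  # connect to the local Docker daemon

# Pull the official Apache httpd image and start a container from it,
# publishing container port 80 on host port 8080.
container = client.containers.run(
    "httpd:2.4",
    name="apache-demo",        # arbitrary name for this example
    ports={"80/tcp": 8080},
    detach=True,
)

print(container.short_id, container.status)
# Apache should now answer on http://localhost:8080
# Clean up when done:
#   container.stop(); container.remove()
```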
Mobile is making itself felt in retail in obvious and not-so-obvious ways. But Visa and Kroger are dealing with mobile in very different ways, with Visa — perhaps a decade too late, but late is better than never — conceding that the authentication of mobile payments makes signing for a purchase no longer necessary. Meanwhile, Kroger is pushing mobile checkout but still wants shoppers to wait in line to pay.
Let’s start with Visa. In a very significant — and long overdue — move, Visa last week (Jan. 12) joined fellow card brands MasterCard, American Express and Discover in signaling an end to the payment signature, as of April in Visa’s case. Technically, the brands merely said that signatures are no longer required, but given that retailers have begged for the end of signature for years, as a practical matter, it will be gone in the U.S. before the summer arrives.
A lot of factors are behind this decision (EMV, in-store video cameras tracking purchases, the lack of meaningful signature analysis at the POS, etc.), but what pushed signature over the cliff was mobile.
We noted the lunacy back in May 2016, when EMV started kicking in. Until then, shoppers could make mobile payments authenticated with a finger scan (and today, slowly, facial recognition) and be on their way. But as we noted back then, the EMV change forced shoppers to be called back to the POS to sign for their purchase.
Remember way back when you took your SATs? They had those comparisons we had to complete: A is to B, as C is to ___. Well, the authentication of biometrics is to the authentication of signature as a nuclear warhead is to a spitball. Actually, that’s not fair. A spitball can indeed cause momentary pain, whereas signature in 2018 doesn’t deliver any authentication at all. How many handwriting experts are working at Walmart or Macy’s? Why even bother?
The real issue behind that “come on back here and sign” situation involved the lack of visibility into the payment method. But once that was cleaned up, it became clear that signature was pointless. Granted, signature has been pointless for more than a decade, but mobile payments’ far superior authentication made it ludicrous.
It was late last year that MasterCard, American Express and Discover all announced the end of the signature requirement, with Visa (by far the largest of the payment brands) waiting until January to do the same. But at least they all ultimately concluded that fighting mobile payments no longer made sense.
As for Kroger, that lesson has yet to be learned. Back in October, Kroger — a $115 billion retailer with almost 2,800 stores — noted a variety of impressive (impressively vague) technology plans: “Kroger will continue building its Internet of Things sensor network, video analytics and machine learning networks and complement those innovations with robotics and artificial intelligence to transform the customer experience.” Kroger spokespeople have declined to get more specific.
But one specific effort that the company has detailed somewhat is its plans to “redesign front-end to maximize stores for self-checkout, include expanding currently 20-store Scan, Bag, Go pilot to 400 stores in 2018.”
Business Insider probed a bit deeper into that pilot. Here’s the key quote: “Shoppers scan the barcodes of items they wish to purchase using a handheld scanner, provided by Kroger, or the chain’s ‘Scan, Bag, Go’ app on any smartphone. The technology will keep a running tab of shoppers’ total order and offer applicable coupons. It will also eventually alert customers when they walk past an item on their shopping list. When customers are finished shopping, they can visit a self-checkout register to pay for their order. Soon, shoppers will be able to skip that step and provide payment through the app instead.”
Let’s drill into that smartphone app, which is the appropriate method. The vast majority of Kroger shoppers already walk into the stores with smartphones, so why make them use another device? I love the idea of a feature that will “alert customers when they walk past an item on their shopping list,” but it’s not clear how that will be done. Will it use item-level RFID to truly note what every SKU is, or will it merely use a planogram from the store showing where items are supposed to be rather than where they truly are? Will it flag when the specific product (raspberry-flavored corn flakes in the 9-ounce containers) is out of stock, or will it send shoppers on a pointless search for something that isn’t there?
Those capabilities aside, it’s a good first step. My question is why Kroger would go this far and yet not initially include mobile payment. Put another way, why have the smartphone scan every item, total it and then force the shopper to stand in line to pay — when excellent mobile payment options have been around for years?
The most frustrating part is that Kroger execs will likely decide whether to move to mobile payment based on the participation rate. What is that logic? If it performs poorly — and who is to say what constitutes “poorly”? — then it will be dismissed as a failure. Why not start with mobile payments and truly embrace current technology and shopping convenience?
I asked Kroger what is delaying mobile payments, but it didn’t reply.
Lisa Davis, a vice president at Salt Lake City-based analytics firm InMoment, shared an interesting theory about why Kroger is doing this in a way that makes its effort “more clunky, more frustrating.” She speculated that it’s not an IT hurdle as much as an LP (loss prevention) hurdle. Kroger’s LP mechanisms, including positioned security cameras and associates watching for theft at the self-checkout, would be ineffective against an in-aisle checkout system. In short, Kroger’s LP infrastructure is not yet able to support mobile checkout.
“When they roll out technology, they forget about looking at it from the customer’s perspective. They are missing a really critical touchpoint in this journey,” Davis said. “Their processes need to be completely reimagined.”
Davis’ point is a good one. When a company such as Kroger rolls out a better way to shop, it needs to change all of its processes to accommodate that.
The choice for retailers today is to reluctantly embrace mobile, as Visa is doing, or to try to force mobile to accommodate existing retail infrastructure, which oddly seems to be Kroger’s approach.
Prices of bitcoin and other digital currencies skidded Tuesday after South Korea’s top financial policymaker said a crackdown on trading of cryptocurrencies was still possible, the AP reports. Finance Minister Kim Dong-yeon said in an interview with local radio station TBS that banning trading in digital currencies was “a live…
Apple CEO Tim Cook visiting supplier Foxconn. (Image: file photo)
Catcher Technology, a Taiwan-based factory making casings for Apple’s iPhone and Mac, violated 14 of Apple’s supplier-responsibility standards, according to China Labor Watch and Bloomberg reports on Tuesday.
From October 2017 to January 2018, CLW conducted an investigation at a Catcher factory in Suqian. Major issues found at Catcher included occupational health and safety, pollution, and work scheduling, the report said.
Catcher has violated worker rights before. In 2014, CLW found violations including discriminatory hiring policies, a lack of safety training, long work hours, and low wages.
In its recent report on Catcher, CLW detailed a work schedule that saw workers losing overtime pay:
The Catcher factory schedules Saturdays as overtime, with workers being paid double time, and Sundays as days off. However, the factory has now adopted a “seven shifts, six rotations” work schedule. From Monday to Friday, workers take turns having a day off, which means that workers have their day off earlier in the week but then make up that day of work later on. Saturdays are used to make up that day and are therefore not paid at double time, and Sundays are still counted as regular workdays. Workers affected by this schedule lose around 500 RMB ($76.57 USD) every month in overtime pay.
CLW found that on the morning of May 25, 2017, there was a toxic gas poisoning incident at Catcher’s A6 workshop. The incident resulted in the hospitalization of 90 workers, with five admitted to intensive care, the report said.
CLW investigators also found that wastewater at the factory had an excess of white foam and was discharged directly into the public waste system.
Bloomberg reported that when a journalist visited the plant in January, about eight workers shared a cramped dorm room of roughly four bunk beds. Dorms lack hot water and showers, and in interviews with Bloomberg, factory workers described long, harsh work hours and concerns about safety issues.
An Apple spokesperson told Bloomberg it sent additional team members to audit the factory upon hearing of CLW’s impending report. The spokesperson said that after interviewing 150 people, the Apple team didn’t find evidence of violations of its standards.
“We know our work is never done and we investigate each and every allegation that’s made. We remain dedicated to doing all we can to protect the workers in our supply chain,” the spokesperson added to Bloomberg.
The Catcher factory also supplies parts for Samsung, HP, Lenovo, and LG. Catcher told Bloomberg it investigated the claims, but like Apple, found no evidence.
In 2016, Apple issued a code of conduct for its suppliers to adhere to. It assesses suppliers in three main categories: labor and human rights, environmental responsibility, and health and safety. On its website, Apple says it conducted 705 supplier assessments in 2016, up from 574 in 2015. Numbers for 2017 haven’t yet been published by the company.
Apple’s supplier code says its partners should “identify, evaluate, and manage occupational health and safety hazards through a prioritized process of hazard elimination, substitution, engineering controls, administrative controls, and/or personal protective equipment.”
With hundreds of business-oriented laptops to choose from — everything from sleek ultralight tablets to huge portable workstations — picking the right ones to outfit your company’s workforce can make finding a needle in a haystack seem easy. We’re here to help with a buyer’s guide that breaks the options into categories and provides pros and cons of each.
Let’s begin with the basics. Unlike consumer systems, business laptops are not meant for gaming, movies or idle web surfing — unless that’s your business. First and foremost, these systems are serious tools to help people do their jobs. In addition to sporting less garish color schemes than many consumer models, they focus on reliability and durability. Manufacturers typically sell business models for close to two years to accommodate long enterprise deployments; many promise replacement parts for five years.
Turned off by the price tags of business systems? Unlike consumer prices, they are generally just the starting point for a negotiation. Most vendors offer volume discounts or the option to lease, which turns a large capital cost into a predictable monthly expense, usually at no cost premium over its life. Plus, at the end of the lease, you don’t have to worry about hardware disposal.
And with an expected three- to four-year usable lifetime, many mid- and upper-price-range business notebooks go beyond the standard single-year warranty with three years of coverage. This is often worth several hundred dollars compared to systems aimed at home users.
Based on what most large companies use, this guide concentrates on Windows systems, but in an age of workplace diversity, Apple devices are also represented. Chromebooks are also gaining traction among companies outfitting employees who don’t need peak performance: See “A new business tool: Chromebooks.”
What to look for in a business laptop
With all the hacking horror stories, security is critical in today’s business. Companies that use Windows PCs should look for systems with a Trusted Platform Module (TPM) and some sort of biometric authentication method, such as a fingerprint reader or a camera capable of Windows Hello facial recognition for password-free log-ins. Business-oriented Windows laptops should also support serious manageability features, such as the ability to tap into Intel vPro processor extensions so IT departments can remotely diagnose and service a system.
Razer isn’t afraid to float some interesting product ideas around CES each year. Over the past few years, the gaming hardware company has offered up such concepts as Project Christine, a modular desktop PC, and Project Fiona, a Windows 8 gaming tablet.
This year is no exception, though 2018’s moonshot seems a little more practical. Project Linda actually takes an idea that’s been previously developed — pairing a smartphone with a shell of a laptop to serve essentially as a dock — by companies big (from Motorola back in 2011 to HP last year) and small (crowdfunded campaigns like the Superbook and the Mirabook), though it gives it the flair that Razer is known for.
Like HP’s Elite x3, Project Linda has more style than just a laptop shell. For instance, the aluminum-clad chassis features a 13.3-inch “Quad HD” (2,560 x 1,440) display compared to the Elite x3’s 12.5-inch 1,920 x 1,080 screen. It would also come with 200GB of built-in storage to supplement smartphone storage, which other phone docks usually don’t include.
Something else that other docks don’t provide that Project Linda does is a docking area carved out of the space where a touchpad typically goes. That’s because it’s specifically designed to work with the recently released Razer Phone, the company’s high-end Android smartphone that can either serve as a touchpad or an auxiliary screen when connected to the dock.
The dock has the ability to charge the Razer Phone while it’s connected, and the keyboard has Android-specific keys for loading apps and navigating the OS. As you might expect, Razer is touting Project Linda’s ability to enhance the Android gaming experience with the larger playing screen and the ability to use a mouse to control games, though the result probably wouldn’t be as immersive as the company’s more powerful and Windows-based Razer Blade family of gaming laptops.
As with its other projects, Razer is seeking community feedback on Project Linda before it decides whether to bring the product to actual fruition. So while there’s obviously no pricing or release date for the docking system, this concept may have a better chance of coming to market as it supports an existing device in the Razer Phone and probably won’t be extravagantly expensive since it doesn’t have an expensive processor and graphics card inside. Stay tuned and we’ll report if Project Linda ever sees the light of day.
Maersk and IBM today announced a joint venture to deploy a blockchain-based electronic shipping system that will digitize supply chains and track international cargo in real time.
The new platform could save the global shipping industry billions of dollars a year by replacing the current EDI- and paper-based system, which can leave containers in receiving yards for weeks, according to the companies.
Blockchain will enable a single view via a virtual dashboard of all goods and shipping information for all parties involved, from manufacturers and shippers to port authorities and government agencies.
As an immutable, distributed ledger, blockchain technology will also improve security, according to Michael White, former president of Maersk Line in North America and CEO of the new company.
“With blockchain, the improvement in security is significant with the double encryption,” White said. “And one of the advantages of blockchain is the immutable record and trust people can have in it. If anything changes in a document…, it’s immediately apparent to all.”
Blockchain’s native immutability as a distributed ledger will also create an automatic audit trail for regulators, something with which the industry has struggled.
Along with paper legal documents, much of the international shipping industry’s information has been transmitted via electronic data interchange (EDI) – a 60-year-old technology. But once shipping manifests move to API-based technology on the new platform, shippers and everybody else in the supply chain will have more timely information and improved visibility, White said.
The blockchain technology will employ smart contracts, or self-executing workflows, determined by the goods being shipped and the authorizations they require while in transport.
“The key point is how do you eliminate or minimize delays and how can you shorten the time people are waiting for information or documentation for cargo to move efficiently,” White said.
For example, a large export commodity such as avocados shipping from Mombasa, Kenya, to the port of Rotterdam in the Netherlands can take as long as 34 days, two weeks of which involve port authorities awaiting shipping information and government document approval.
“The [shipping] documents as they’re created have to go from one customs or regulatory agency to another, often by courier on a motorbike. Once they’re signed off on, they’re put on a courier pouch, sent to destination where the broker presents it to Dutch customs who then has to validate if it’s a bonafide, original certificate…, and then, ultimately, it’s cleared and delivered,” White said.
White cited one avocado shipment that involved 30 port and government officials, and 200 pieces of communications among 100 people. “You can imagine if any one of those documents are delayed, if there are questions around validity, then the shipment will be held up,” he added.
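To make the idea of a self-executing workflow concrete, here is a toy sketch in plain Python of a contract that releases a shipment only after every required sign-off has been recorded. It is purely illustrative; the actual Maersk/IBM platform implements this kind of logic as chaincode on Hyperledger Fabric, and the commodities and authorities named here are invented.

```python
from dataclasses import dataclass, field

# Toy illustration of a "self-executing workflow." The real platform expresses
# this logic as Hyperledger Fabric chaincode; the names below are invented.
REQUIRED_APPROVALS = {
    "avocados": {"exporter", "origin_customs", "shipping_line", "destination_customs"},
}

@dataclass
class Shipment:
    commodity: str
    approvals: set = field(default_factory=set)

    def record_approval(self, authority: str) -> None:
        """Record an authority's sign-off (on the real platform, a ledger entry)."""
        self.approvals.add(authority)

    def cleared(self) -> bool:
        """The contract 'self-executes': cargo is released only when all sign-offs exist."""
        return REQUIRED_APPROVALS[self.commodity] <= self.approvals

cargo = Shipment("avocados")
for authority in ("exporter", "origin_customs", "shipping_line"):
    cargo.record_approval(authority)
print(cargo.cleared())  # False: destination customs has not signed off yet
cargo.record_approval("destination_customs")
print(cargo.cleared())  # True: the workflow can release the cargo
```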
While international shipping is a $4 trillion-a-year industry and 80% of those goods are carried on ocean vessels, much of the logistics involved in creating cargo manifests, tracking shipments, and even getting sign-offs from customs and other port authorities remains a paper process.
A new blockchain-based, distributed electronic ledger could save the shipping industry billions of dollars a year by replacing the current EDI- and paper-based system for tracking cargo and obtaining approval from customs and port authorities.
As the world’s trading ecosystems grow larger and more complex, the cost of the trade documentation required to process and administer goods shipped globally is expected to reach one-fifth of actual physical transportation costs.
According to The World Economic Forum, by reducing barriers within the international supply chain through transparent, electronic communications, global trade could increase by nearly 15%, boosting economies and creating jobs.
Over the past 18 months, Denmark-based Maersk has been piloting the blockchain platform with various customers, including DuPont, Dow Chemical, Tetra Pak, Port Houston, Rotterdam Port Community System Portbase, the Customs Administration of the Netherlands and U.S. Customs and Border Protection.
Maersk and IBM’s new company must still get regulatory approval, at which time its name will be announced. But the partners expect the new electronic shipping platform to be generally available in the next three to six months, according to White.
The platform was built on IBM’s blockchain technology, which is provided through its cloud service. IBM’s blockchain is based on the open-source Hyperledger Fabric 1.0 specification created by the Linux Foundation.
“It had to be based on open standards…and a vendor-neutral platform so all other shipping lines using it could be on equal footing,” said Ramesh Gopinath, vice president of Blockchain Solutions at IBM. “That’s also the reason it’s a separate company from Maersk and IBM.”
IBM and Maersk have employed other cloud-based open source technologies on the shipping platform, including artificial intelligence, IoT, and analytics, in order to help companies move and track goods digitally across international borders. Manufacturers, shipping lines, freight forwarders, port and terminal operators, shippers, and customs authorities will all be able to access the platform’s virtual dashboard on a permissioned basis.
To date, 18% of Maersk’s global containerized volume has already been entered into the blockchain application, a figure that will increase over time, according to Gopinath.
“Port terminals are very interested to try to get information further upstream to understand what shipments are coming in order to determine berth availability and yard congestion to enable the cargo to move more fluidly,” Gopinath said.
Unlike bitcoin, which is based on an open blockchain where all participants can see every data entry, IBM and Maersk’s application will be centrally administered or “permissioned.” Each participant in the blockchain will have their authority to access data limited by their individual needs, Gopinath said.
With their joint venture, IBM and Maersk will be able to commercialize and scale the electronic shipping service to a broader group of global corporations, many of whom have already expressed interest in the capabilities and are exploring ways to use the new platform. Those companies include Procter and Gamble, which is looking to streamline the complex supply chains it operates; and freight forwarder and logistics firm Agility Logistics, which hopes to provide improved customer services, including customs clearance brokerage.
When initially launched this year, the first two capabilities of the electronic platform will be the digitization of the global supply chain and paperless trade, enabling users to securely submit, validate and approve documents across international or organizational boundaries.
Blockchain-based smart contracts will ensure all required approvals are in place, helping speed up approvals and reducing mistakes.
Upon regulatory clearance, solutions from the joint venture are expected to become available within six months, at which time White said he expects goods manufacturers, shippers, ports and other third parties to participate in adding new capabilities such as invoice dispute resolution.
“There are a lot of track and trade products out there, but they’re inconsistent and there are gaps in the information. The need for real-time access to events and documentation is critical,” White said. “Having an immutable, shared audit trail is important, but the bigger value is getting access to the information and the documentation when it’s needed to prevent cargo from being delayed.”
The Windows emergency security updates issued by Microsoft earlier this month came with an unprecedented prerequisite – a new key stored in the operating system’s registry – that antivirus vendors were told to generate after they’d guaranteed their code wouldn’t trigger dreaded Blue Screens of Death (BSoD) when users apply the patches.
The demands confused customers, and fueled a flood of support documents and an avalanche of web content. Those who heard about the Meltdown and Spectre vulnerabilities struggled to figure out whether their PCs were protected, and if not, why not. Millions more, not having gotten wind of the potential threat, carried on without realizing that their PCs might be barred from receiving several months’ worth of security updates.
Here are the steps Windows users can take to ensure their PCs continue to receive security updates.
Check antivirus status, update antivirus
While Microsoft hasn’t told customers which antivirus (AV) vendors broke the rules by making unauthorized calls to the kernel – the reason the company’s patches, which modify the kernel, may provoke BSoDs when certain AV software is loaded into memory – or even tracked the progress AV vendors have made toward compliance, someone has.
Security researcher Kevin Beaumont publicly posted a spreadsheet listing more than 40 of the most popular AV products, and has updated it as vendors have released updates. Beaumont’s spreadsheet indicates whether the vendor generates the registry key, is compatible with the January Windows updates, and in most cases, he provided links to the AV makers’ explanatory documentation.
Beaumont’s tracker has been invaluable to Windows users, who can use it to ascertain AV status before (or after) grabbing the latest antivirus program update, and read accompanying information.
Check the Windows Registry
The most important requirement – really, the only requirement – to receive January’s security update is the presence of the Windows registry key antivirus vendors are to create to “attest to the compatibility of their applications,” as Microsoft put it earlier this month.
Verifying that this key exists takes only moments. It’s a good idea to confirm that it’s present after scoping out and updating AV, but before applying January’s Windows update.
In Windows, launch the registry editor (Regedit.exe) by typing REGEDIT in the search box (Windows 10) or in the Run box (Windows 7). The Run box will appear after pressing the Windows key at the same time as the r key.
Approve Regedit’s launch by selecting “Yes” in the ensuing User Account Control pop-up.
The key will be within this folder: HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows\CurrentVersion\QualityCompat
Click on the QualityCompat folder to open it.
(To avoid having to root through layers of nested subfolders, simply copy the folder name above, then paste it into the field immediately under the menus in the registry editor.)
Inside the folder should be the key, identified as cadca5fe-87d3-4b96-b7fb-a231484277cc under the “Name” column, and REG_DWORD under the “Type” column.
If the key is there, close the editor by selecting “Exit” from the “File” menu.
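For those who prefer scripting the check, here is a minimal sketch using Python’s standard winreg module (Windows only). It does nothing more than report whether the compatibility value is present.

```python
import winreg  # Python standard library, Windows only

KEY_PATH = r"SOFTWARE\Microsoft\Windows\CurrentVersion\QualityCompat"
VALUE_NAME = "cadca5fe-87d3-4b96-b7fb-a231484277cc"

try:
    with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, KEY_PATH) as key:
        value, value_type = winreg.QueryValueEx(key, VALUE_NAME)
        kind = "REG_DWORD" if value_type == winreg.REG_DWORD else str(value_type)
        print(f"Compatibility value found: {value} ({kind})")
except FileNotFoundError:
    print("QualityCompat key or value not found; no antivirus has set it on this PC.")
```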
Add the key manually
If the installed antivirus product didn’t generate the key – some did not initially, but most have now complied – or if there’s no AV on the system, the user must set the key manually.
Note: Before monkeying with the registry, back it up. See this Microsoft support document for how-to info.
Use the same instructions under the previous section to launch Regedit and navigate to the folder: HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows\CurrentVersion\QualityCompat
Right-click the QualityCompat folder (also called a “subkey”), and choose “New/DWORD (32-bit) Value” from the menu.
In the field under the “Name” column – initially, this will read “New Value #1” – enter or copy/paste this: cadca5fe-87d3-4b96-b7fb-a231484277cc
Exit the registry editor.
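Those same steps can also be scripted. Here is a minimal sketch using Python’s winreg module, run from an elevated (administrator) session; it creates the QualityCompat subkey if necessary and writes the DWORD value the January updates look for.

```python
import winreg  # Python standard library, Windows only; run from an elevated prompt

KEY_PATH = r"SOFTWARE\Microsoft\Windows\CurrentVersion\QualityCompat"
VALUE_NAME = "cadca5fe-87d3-4b96-b7fb-a231484277cc"

# CreateKeyEx opens the subkey, creating it if it does not already exist.
with winreg.CreateKeyEx(winreg.HKEY_LOCAL_MACHINE, KEY_PATH, 0,
                        winreg.KEY_SET_VALUE) as key:
    # Write the REG_DWORD value (data 0x00000000) that the updates check for.
    winreg.SetValueEx(key, VALUE_NAME, 0, winreg.REG_DWORD, 0)

print("QualityCompat compatibility value set.")
```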
Add the key with an automated tool
Microsoft may have left users to dive into the registry on their own, but others offered tools that generated the compatibility key correctly.
Trend Micro, for example, posted a download link to what it labeled ALLOW REGKEY, an archived file in .zip format. (On the page reached from the link above, look for “OPTION 1: Download and run ALLOW REGKEY.reg to let Windows receive 2018 1B update.”)
Run the tool as described on Trend Micro’s page.