The PureApplication System 2.2.3 release introduced exciting developments in workload management and replication. Previously you could replicate disks from one system to another, but now you can replicate entire applications.
You can find details on how to use these new capabilities in a three-part developerWorks series on hosted VMware environments and replication written by the PureApplication engineering team.
Beginning with PureApplication version 220.127.116.11, released in September 2016, entitlement to use IBM Application Performance Management (APM) monitoring is included for applications deployed on PureApplication System.
However, unlike IBM Tivoli Monitoring (ITM), there is currently no shared service available for automatically deploying APM agents into your PureApplication pattern instances. So you must arrange to install and configure the APM agents yourself.
But now this process is simplified! Several of my PureApplication colleagues have published an article describing how you can use script packages in your pattern to install and configure the APM agents in your pattern instances. You can find their article at IBM developerWorks.
What happens if network connectivity is lost in your multi-system deployment? Because of the variety of network communications that take place, the answer is “it depends.”
There are four different network endpoints involved in PureApplication System’s multi-system deployment:
- The virtual machine data addresses, on NIC en1/eth1 [A]
- The virtual machine management addresses, on NIC en0/eth0 [B]
- The PureApplication Systems’ management addresses [C]
- The iSCSI tiebreaker address [D]
Between these addresses, there are five different network interactions that take place. Connectivity failures in or between these networks result in different outcomes:
- Communication between all of the VMs in the deployment over their data addresses [A to A]
What happens when this communication is broken depends on the application being deployed: the traffic might be application-to-deployment-manager or application-to-database, so the application may or may not remain available. For example, if you have deployed GPFS mirrors across two sites and the data communication is severed, GPFS remains available in one site provided that it can still reach its GPFS tiebreaker. If you have deployed a WAS cluster using this GPFS mirror, then the WAS custom nodes that can connect to the surviving GPFS mirror continue to function, provided that they can also reach their database.
- Management communications between the virtual machines [B to B]
- Management communications between the virtual machines and the system [B to C]
These communications are used to keep the PureApplication UI up to date with the status of the system. If these communications are broken then the application is not affected, but some of the VMs may have unknown status in the UI. Scaling the deployment will not be possible if [B to C] communications are broken on both racks.
- Communication between the systems [C to C]
If this communication is broken, the outcome depends on which systems can still reach the iSCSI tiebreaker [C to D]:
- If neither system can communicate with the tiebreaker, then externally managed deployments on both systems are frozen (no deploys, no deletes, no scaling).
- If only one system can communicate with the tiebreaker, then external deployments are not frozen on that system but are frozen on the other system.
- If both systems can communicate with the tiebreaker, then external deployments are not frozen on one system (which one is unpredictable) but are frozen on the other system.
- Communication between the systems and the tiebreaker [C to D]
If the systems can communicate with each other [C to C], then the tiebreaker communication is just a failsafe mechanism, and an outage of it is harmless. However, if there is a double failure of communication between the systems [C to C] and also to the tiebreaker [C to D], then externally managed deployments on both systems are frozen (no deploys, no deletes, no scaling), as indicated above.
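The freeze rules above can be summarized as a small decision function. The sketch below is purely illustrative: the function name and parameters are hypothetical, and when both systems reach the tiebreaker but not each other, the winner is unpredictable in practice (system A is arbitrarily modeled as the winner here).

```python
def external_deployment_state(c_to_c_up, sys_a_reaches_tiebreaker,
                              sys_b_reaches_tiebreaker):
    """Return (state of system A, state of system B): 'active' or 'frozen'."""
    if c_to_c_up:
        # The systems can talk to each other, so the tiebreaker is only a
        # failsafe; an outage of it is harmless.
        return ("active", "active")
    if sys_a_reaches_tiebreaker and sys_b_reaches_tiebreaker:
        # Both reach the tiebreaker but not each other: one system wins
        # (unpredictably); we arbitrarily model system A as the winner.
        return ("active", "frozen")
    if sys_a_reaches_tiebreaker:
        return ("active", "frozen")
    if sys_b_reaches_tiebreaker:
        return ("frozen", "active")
    # Double failure: neither the peer system nor the tiebreaker is reachable.
    return ("frozen", "frozen")

# Examples:
print(external_deployment_state(True, False, False))   # ('active', 'active')
print(external_deployment_state(False, True, False))   # ('active', 'frozen')
print(external_deployment_state(False, False, False))  # ('frozen', 'frozen')
```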
If you are running AIX on a PureApplication W3700 POWER8 system, you should pay attention to this APAR: IT14338: PureApplication System: Some AIX virtual machines have 8 SMT threads and others have 4 SMT threads per processor setting
The implication is that virtual machines that are rebooted do not preserve their SMT8 setting and revert to SMT4. The fix for this issue is contained in the IBM AIX OS image beginning with PureApplication 18.104.22.168, but for any virtual machines deployed at earlier levels you need to take manual action to ensure the SMT8 setting is preserved.
You can preserve the SMT8 setting on your existing LPARs by following the instructions in this dwAnswers post: Why is SMT (simultaneous multi-thread) value set to 4 on my AIX virtual machine after VM reboot on PureApplication System W3700?
You can count how many deployments you have cumulatively made on your PureApplication System since it was first installed using the following CLI script:
virtual_systems = http.get('/resources/virtualSystems/?type=WORKLOAD')
if virtual_systems:
    max_id = max([x['id'] for x in virtual_systems])
    print "The maximum deployment id is %d" % max_id
else:
    print "There are no running deployments"
This script takes advantage of the fact that PureApplication deployments have an internal identifier with a monotonically increasing value, which allows it to account for older deployments that have since been deleted. However, the count assumes that your most recent deployment still exists: if you currently have no deployments, the script cannot calculate a result, and if you have deleted your most recent deployments, it counts only up to the most recent remaining deployment.
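To illustrate the counting logic outside the CLI, here is a standalone Python sketch with made-up sample data standing in for the CLI's http.get() call (the ids and names below are hypothetical):

```python
# Hypothetical sample data: deployments 1 through 41 have been deleted,
# so only two entries remain in the list returned by the system.
virtual_systems = [
    {"id": 17, "name": "web-cluster"},
    {"id": 42, "name": "db-mirror"},
]

if virtual_systems:
    max_id = max(vs["id"] for vs in virtual_systems)
    # Ids increase monotonically, so max_id counts every deployment ever
    # made on the system -- provided the most recent one still exists.
    print("The maximum deployment id is %d" % max_id)
else:
    print("There are no running deployments")
```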
Yesterday IBM announced the new Bluemix Local System, the third generation successor to our Intel-based PureApplication System. There are three exciting new developments here:
I’m very proud of our team’s accomplishment in delivering the Bluemix Local System!
When I visited IBM customers and business partners in Bangkok and Manila last week, many of our conversations revolved around high availability and disaster recovery.
I previously contributed to an IBM Redbook on high availability and disaster recovery in PureApplication System that you can refer to as a resource. But now I’ve also completed a new overview presentation on implementing high availability and disaster recovery in PureApplication System. This presentation provides some background on HA and DR best practices in PureApplication System, summarizes the PureApplication features that serve as HA and DR building blocks, identifies a number of topologies that you can build using these building blocks, and closes with some important detailed considerations.
I hope you find this helpful as you design, implement, and test your solution!
Last week I spent time in Bangkok and Manila with the IBM PureApplication local teams and with IBM’s customers and business partners. It’s exciting to see continued growth in production workloads running on PureApplication System in ASEAN.
There were two repeated themes to many of our conversations. First was the importance of PureApplication’s pattern technology for building a DevOps pipeline that allows application and infrastructure teams to build greater confidence in the handoff from team to team between QA and production. Second was the strong interest in building high availability or disaster recovery solutions using PureApplication System. PureApplication provides many building blocks, such as GPFS and disk replication, that can serve as foundations for building HA and DR solutions.
See also: PureApplication High Availability and Disaster Recovery
My colleague Zhao Liu recently wrote an article that provides helpful background information on how System Monitoring works in IBM PureApplication System, together with detailed instructions on how to implement our recently added support for monitoring IBM DataPower virtual appliances:
Monitor DataPower virtual appliances from PureApplication System
IBM’s PureApplication products (System, Software and Service) include a sophisticated placement engine that works to manage and optimize factors such as system availability, application availability, CPU usage, and licensing.
But perhaps you’ve found yourself wondering just how, when and why PureApplication makes its placement decisions. Why did this virtual machine get migrated? Why didn’t that one get migrated? To help answer questions like these, Roy Brabson, Hendrik Van Run and I have just published an article describing in detail how virtual machine placement works in PureApplication System, Software and Service. We hope that this satisfies your curiosity!