Resource Tagging
Resource tagging is now available and integrated into the UI. One simple but powerful use is assigning names to objects (e.g. MarchBackup) instead of relying only on the default resource ID value (e.g. snap-E708D3H).
So if you assign a "name" tag, it will appear here. There was a question about whether users will still be able to see the ID value. That isn't possible just yet, so the UI team will look into how it might be addressed. NOTE: this doesn't mean it will be implemented for 3.3, just that the team will at least investigate it.
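Since tagging follows the EC2 API, it can also be driven from the command line. Here's a minimal sketch using euca2ools (the exact flag syntax may vary by version; check euca-create-tags --help):

```shell
# Assign a human-readable "Name" tag to the snapshot from the example
# above, so the UI shows "MarchBackup" instead of just the resource ID.
euca-create-tags snap-E708D3H --tag Name=MarchBackup

# Because Eucalyptus implements the EC2 tagging API, the equivalent
# AWS EC2 command line tools should work as well.
```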
Maintenance mode
Maintenance mode refers to the ability to evacuate instances from one or more Node Controllers (NCs) so that the NC(s) can be taken offline for maintenance. To do this the Cloud Administrator uses the migration tool (euca-migrate-instances) to tell Eucalyptus which NC to clear out and take out of the pool. Eucalyptus then moves instances from the target node to others. Here's what it looks like from the back end:
- Cloud admin runs euca-migrate-instances against a target node
- The backend looks at the instances and tries to find a new home within the cloud
- Each instance is migrated.
- Once all instances are migrated, the target node is in "maintenance mode".
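From the admin's shell, the steps above boil down to a single command. A sketch (the host IP is hypothetical, and the flag name is an assumption; consult euca-migrate-instances --help on your installation for the exact syntax):

```shell
# Evacuate every instance from the node controller at 10.0.1.20
# (placeholder IP). Eucalyptus finds new homes for the instances
# and migrates them one by one.
euca-migrate-instances --source 10.0.1.20

# When no instances remain on the node, it is effectively in
# "maintenance mode" and safe to service.
```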
Migration over VMware too!
The demo above ran on a KVM-backed cloud. A second demo showed the same migration working on VMware. I highly recommend viewing the webinar replay to see this in action.
NetApp Cluster Mode Support
Swathi did a great job providing an overview of NetApp Cluster Mode's capabilities, deployment options, and sample uses. She then explained the settings required to get Eucalyptus working with this kind of rig. This part was totally new to me and was covered so fast that I found it difficult to keep up; those with storage experience would have no trouble. Still, there were a few key take-aways I captured:
To use NetApp Cluster Mode, Eucalyptus:
- Must have a CHAP user - the Node Controllers will use this to make a more secure (authenticated) connection to the storage device.
- Must define NC paths - the addresses the NCs will use to communicate with the backend storage.
- Must have vserver name - …I missed what this was for...
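These settings would presumably be applied as storage properties on the cloud. A sketch using euca-modify-property (the partition name PARTI00 and all property names and values here are illustrative assumptions, not confirmed from the talk; verify the real names with euca-describe-properties):

```shell
# Hypothetical property names -- check your cloud's actual properties:
#   euca-describe-properties | grep storage
euca-modify-property -p PARTI00.storage.chapuser=euca-chap-user
euca-modify-property -p PARTI00.storage.ncpaths=192.168.10.0/24
euca-modify-property -p PARTI00.storage.vservername=vs1
```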
Auto Scaling
This part was pretty cool, and I got caught up watching the demonstration rather than taking notes. If you're already familiar with AWS Auto Scaling, you'll immediately understand how to use Eucalyptus' Auto Scaling. In fact, the configuration is the same, and you can use the AWS command line tools with Euca.
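As a sketch of what that looks like with the stock AWS Auto Scaling CLI pointed at a Euca cloud (the endpoint URL, EMI ID, zone, and group names are placeholders, not from the demo):

```shell
# Point the AWS Auto Scaling command line tools at the Eucalyptus
# endpoint instead of AWS (placeholder URL for your cloud).
export AWS_AUTO_SCALING_URL=http://my-clc.example.com:8773/services/AutoScaling

# Create a launch configuration and a scaling group exactly as you
# would against AWS, but with a Eucalyptus image (EMI) ID.
as-create-launch-config demo-lc --image-id emi-12345678 --instance-type m1.small
as-create-auto-scaling-group demo-asg --launch-configuration demo-lc \
    --availability-zones myzone --min-size 1 --max-size 3
```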
CloudWatch
Users can use just about any of the supported metrics to drive CloudWatch rules: EC2 and EBS metrics are available now, with ELB and others coming in the next sprint. All the metrics and metric definitions adhere to the AWS spec, and just like Auto Scaling, users can drive this Eucalyptus service with the AWS command line tools.
It is helpful to note that CloudWatch is a separate component, though currently co-located with the CLC. In the future it may be possible to deploy it separately; I'm pretty sure the team will wait for the user community to help decide whether that's necessary.
Screen shot of the metrics used in the demo:
Here are the alarm definitions (notice the first line showing use of the CW CLI).
And finally, the alarm history:
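For a flavor of the kind of CW CLI usage visible in the demo, here is a sketch of defining an alarm with the AWS CloudWatch command line tools (the alarm name, instance ID, and threshold are hypothetical, not the demo's actual values):

```shell
# Alarm when average CPU on a given instance stays above 80% for two
# consecutive 5-minute periods (all names/values are placeholders).
mon-put-metric-alarm demo-high-cpu \
    --metric-name CPUUtilization --namespace AWS/EC2 \
    --dimensions "InstanceId=i-12345678" \
    --statistic Average --period 300 --threshold 80 \
    --comparison-operator GreaterThanThreshold --evaluation-periods 2
```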
Elastic Load Balancing
Last sprint we saw the basic flow of ELB, which gave us a sense of how it would function. In this sprint the implementation was delivered and is ready for QA. It is important to note that the ELB service actually runs as a "protected" instance within your Euca cloud, and it is possible to run multiple ELB server instances. The Cloud Admin will have the ability to start, manage, and stop these special instances, which will be largely invisible to Cloud Users.
With that background in mind, here are the enhancements delivered in this sprint:
- Configure the Euca DNS service to map a load balancer's DNS name to the set of load balancer VMs' public IPs
- Configure and describe instance health check
- Build the ELB EMI to be included in the euca install packages.
There will be two different images: one for developers and one for production use. The developer image will be easy to ssh into and is a bit bigger to accommodate tools.
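Since the ELB API also follows the AWS spec, creating a balancer and its health check can be sketched with the AWS ELB command line tools (the balancer name, zone, and health check target are placeholders, not from the demo):

```shell
# Create a load balancer listening on port 80 (placeholder names).
elb-create-lb demo-lb \
    --listener "lb-port=80,instance-port=80,protocol=http" \
    --availability-zones myzone

# Configure the instance health check described above: probe each
# backend over HTTP and require two consecutive successes/failures.
elb-configure-healthcheck demo-lb --target "HTTP:80/index.html" \
    --interval 30 --timeout 5 \
    --healthy-threshold 2 --unhealthy-threshold 2
```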