VMware

June 06, 2013

Announcing Horizon Mirage 4.2!

VMware End User Computing

by Hanan Stein, Product Management, End-User Computing

Today, VMware is pleased to announce the launch of the latest edition of VMware® Horizon Mirage™: the Horizon Mirage 4.2 release. In Horizon Mirage 4.2, VMware has made major storage performance improvements that greatly reduce the time it takes an endpoint to finish centralization. How significant is the impact, you ask? Great question! Unfortunately, the answer depends heavily on your environment, but we are confident it will reduce the time significantly in environments where storage is the bottleneck. Rather than publish an “X% improvement achieved in the lab” figure, we will try to share performance improvement data from real-world deployments, if our customers give us permission to publish it!

What’s new in VMware Horizon Mirage 4.2?

Endpoint centralization improvements – By reducing the total number of IOPS required, Horizon Mirage 4.2 significantly reduces the time it takes for an endpoint to finish centralizing.  Large deployments with thousands of devices will notice the most improvement!

Windows Vista OS support – Horizon Mirage now supports Windows Vista for disaster recovery and Windows 7 migration (both in-place migrations and through hardware refresh). As with Windows XP to Windows 7 migration, users can continue to work while Horizon Mirage downloads the Windows 7 image in the background, minimizing end-user downtime.

New Help Desk Web Console – Horizon Mirage 4.2 provides a web portal for help desk personnel to troubleshoot and repair an end user’s system. The Help Desk Web Console offers easy access to the admin console from any browser.

Localization – The Horizon Mirage client and File portal are now localized and support four new languages: French, German, Japanese and Simplified Chinese.

Automated in-place Windows 7 migration with Sophos 5.5 – Horizon Mirage can now migrate an endpoint running Sophos 5.5 endpoint encryption without the need to decrypt and re-encrypt the endpoint. This makes the security and compliance team very happy!

VMware Licensing Alignment – Horizon Mirage is no longer licensed by device count; it now follows the Horizon Suite user-based licensing model.

We are incredibly excited to announce our latest developments for Horizon Mirage.   Tell us your thoughts on our latest announcement via Facebook and Twitter.

by Sarah Semple at June 06, 2013 07:12 PM

Transforming IT Services is More Effective with Org Changes

VMware Cloud Ops Blog

By: Kevin Lees

My last post examined what an IT service looks like in practice. But what if you’ve only gone as far as deciding that you need to transform IT? How do you act on that decision?

Now I want to suggest some specific organizational changes that will help you successfully undertake your transformation.

At the heart of the model I’m suggesting is the notion of a Cloud Infrastructure Operation Center of Excellence. What’s key is that it can be adopted even when your org is still grouped into traditional functional silos.

Aspiration Drives Excellence

A Cloud Infrastructure Operation Center of Excellence is a virtual, or physical, team comprised of the people occupying your IT org’s core cloud-focused roles: the cloud architect, cloud analyst, cloud developers and cloud administrators. They understand what it means to configure a cloud environment, and how to operate and proactively monitor one. They’re able to identify potential issues and fix them before they impact the service.

Starting out, each of these people can still be based in the existing silos that have grown up within the organization. Initially, you are just identifying specific champions to become virtual members of the Center of Excellence. But they are a team, interacting and meeting on a regular basis, so that from the very beginning they know what’s coming down the pipe in terms of increased capacity or capability of the cloud infrastructure itself, as opposed to demands for individual projects.

Just putting them together isn’t enough, though. We’ve found that it’s essential to make membership of the cloud team an aspirational goal for people within the IT organization. It needs to be a group that people want to be good enough to join and for which they are willing to improve their skills. Working with the cloud team needs to be the newest, greatest thing – it can also provide a career path for those who fear for their jobs in the “new way of doing things.”

Then, as cloud begins to become more prominent, the virtual team becomes a physical Cloud Center of Excellence team. Champions remain, interacting regularly with the existing functional groups, but the nucleus of a new organization begins to take shape – and the silos begin to crumble.

As cloud becomes the de facto way things are done, the Cloud Center of Excellence can expand and start absorbing pieces of the other functional teams. At this point, you’ll have broken down the silos, the Cloud Center of Excellence will be the norm for IT, and everybody will be working together as an integrated unit.

Four Steps to Success

Here are four steps that can help ensure that your Cloud Infrastructure Operation Center of Excellence rollout is a success:

Step 1 – Get executive sponsorship

You need an enthusiastic, proactive executive sponsor for this kind of change.  Indeed, that’s your number one get – there has to be an executive involved who completely embraces this idea and the change it requires, and who’s committed to proactively supporting you.

Step 2 – Identify your team  

Next you need to identify the right individuals within the organization to join your Center of Excellence. IT organizations that go to cloud invariably already run a virtualized environment, which means they already employ people who are focused on virtualization. That’s a great starting point for identifying individuals who are best qualified to form the nucleus of this Center. So ask: Who from your existing virtualization team are the best candidates to start picking up responsibility for the cloud software that gets layered on top of the virtualized base?

Step 3 – Identify the key functional teams that your cloud team should interact with.

This is typically pretty easy because your cloud team has been interacting with these functional teams in the context of virtualization. But you need to formalize the connection and identify a champion within each of these functional teams to become a virtual member of the Center of Excellence. Very importantly, to make that work, the membership has to be part of that person’s job description. That’s a key piece that’s often missed: it can’t just be on top of their day job, or it will never happen. They have to be directly incentivized to make this successful.

Step 4 – Sell the idea

Your next step is basically marketing. The Center of Excellence and those functional team champions must now turn externally within IT and start educating everybody else – being very transparent about what they’re doing, how it has impacted them, how it will impact others within IT and how it can be a positive change for all. You can do brown bag lunches, or webinars that can be recorded and then downloaded and watched, but you need some kind of communication and marketing effort to start educating the others within IT on the new way of doing things, how it’s been successful, and why it’s good for IT in general to start shifting their mindset to this service orientation.

Don’t Forget Tenant Operations

There’s one last action you need to put in place to really complete your service orientation: create a team that is exclusively focused outwards toward your IT end customers. It’s what we call Cloud Tenant Operations.

Tenant Ops, also called “Service Ops,” is one of the three Ops tiers that enable effective operations in the cloud era, as outlined here and here.

One of the most important roles in this team is the customer relationship (or sometimes ‘collaboration’) manager who is directly responsible for working with the lines of business, understanding their goals and needs, and staying in regular contact with them, almost like a salesperson, and supporting that line of business in their on-boarding to, and use of, the cloud environment.

They can also provide demand information back to the Center of Excellence to help with forward capacity planning, helping the cloud team stay ahead of the demand curve by making sure they have the infrastructure in place when the lines of business need it.

Tenant Operations is really the counterpart to the Cloud Infrastructure Operation Center of Excellence from a service perspective – it needs to comprise someone who owns the services offered to end customers over their life cycle, a service architect, and service developers who understand the technical implications of the requirements. These requirements come from multiple sources, so the team needs to identify the common virtual applications (vApps in VMware parlance) that can be offered out and consumed by multiple organizations (and teams within organizations), as opposed to doing custom one-off vApp development.

In a sense, Tenant Operations functions as the DevOps team from a cloud service perspective, truly instantiating the concept of a service mindset and becoming the face of the cloud environment to its external end users.

These Changes are Doable

The bottom line here: transforming IT Ops is doable. I have worked with many IT organizations that are successfully making these changes. You can do it too.

Additional Resources

For a comprehensive look at how to best make the transition to a service-oriented cloud infrastructure, check out Kevin’s white paper, Organizing for the Cloud.

Also look for the VMware Cloud Ops Journey study findings later this month, which highlight common operations capability changes and the drivers for those changes. For future updates, follow us on Twitter at @VMwareCloudOps, and join the conversation by using the #CloudOps and #SDDC hashtags.

by Kevin Lees at June 06, 2013 04:00 PM

PART 1: A VMware and Trend Micro Q&A: The Challenges and Benefits of Virtualized Environments for Mid-Market Businesses

VMware for Small-Medium Business Blog

by VMware and Trend Micro

According to Gartner, midsize businesses[1] now account for nearly 40 percent of U.S. server sales.[2] Unlike large enterprises, these organizations have fewer resources to support the deployment and maintenance of new servers in existing IT environments. With the goal of optimizing their entire IT infrastructure, midsize businesses are turning to virtualization. No longer limited to enterprises with big IT budgets, virtualization has proven to lower IT costs while improving IT agility. Yet as midsize organizations embrace virtualization, they must also be prepared to address specific IT risks, challenges and opportunities, such as manageability, protection against attacks, and unpatched vulnerability exploits in virtualized environments.

In this two-part series, executives from VMware, the leader in virtualization and cloud infrastructure solutions, and Trend Micro, the global cloud security leader, address how midsize businesses can overcome common IT management and security challenges associated with newly virtualized environments.

Brandon Sweeney is Vice President of U.S. Mid-Market and Small Business for VMware.

Dave Asprey is Vice President of Cloud Security at Trend Micro.

Q&A Responses

Q:  How can midsize businesses benefit from virtualization?

A:    Brandon Sweeney, VMware: IT is critical to most midsize businesses, yet it can be challenging to deploy and manage. Virtualization software helps simplify IT. We know security, virtualization and automation are among the top IT priorities for today’s midsize businesses. We also know virtualization technology—with the right integrated operations management and security solutions—is a highly effective way to meet IT efficiency and agility goals while reducing overall IT expenses. Virtualization and automation enable executives to focus less on IT management, maintenance, deployment, downtime and security issues, and more on growing their businesses.

Dave Asprey, Trend Micro: Midsize businesses really win with virtualization because virtualization provides portability for IT infrastructure. This new agility leads to higher business continuity and better disaster recovery, and it also enables new types of security and better automation of all types of business processes. Virtualization also is the gateway to building a private cloud, which brings even more benefits.

Q:   What is the first question that arises when midsize businesses add virtualization and security to their environments?

A:    Sweeney, VMware: Like with any business investment, everyone wants to know when they will see results. Fortunately with virtualization, cost savings is one of the first tangible results. Businesses don’t have to spend nearly as much on hardware when they operate virtual machines. However, because hardware costs may not be that steep for some midsize businesses, hardware savings may be secondary to operational savings and more reliable systems. Both CapEx and OpEx savings increase as businesses virtualize more of their infrastructure and their business-critical applications.

A: Asprey, Trend Micro: Brandon said it right. Every IT executive wants to show the CFO the bottom line. The truth is that virtualization means you don’t need to spend as much budget on hardware and servers because virtual machines are so much more efficient. The more you virtualize, the more you save. On top of that, virtualization can provide even more benefits when you add management layers to create a private cloud. Security for virtualization can also provide simplicity. By combining many different security features into a single product, and adding an agentless option, you can manage security for an astonishing number of virtual machines from a single-pane-of-glass administrative console. Consolidating your virtualization security into a single place provides peace of mind and cost savings.

Q:   What might prevent a midsize business from moving to a virtualized environment?

A:    Asprey, Trend Micro: Some midsize businesses delay making the move to virtualization for fear they will need to deal with a higher level of complexity and more training. It turns out that the virtualization learning curve is not that steep, and the benefits of infrastructure consolidation lead to less complexity in many cases, not more. It is substantially easier to manage a virtual server than it is a physical server, not to mention the cost benefits of purchasing a service.

Sweeney, VMware: Like Dave, I talk to executives at midsize companies all the time and some still believe that virtualization will add complexity, expense, vulnerabilities or management burdens to their existing IT environments. In reality, virtualization and automation can help simplify IT infrastructure and management. It can maintain—and even improve—existing security positions. And virtualization has proven over and over to reduce IT costs. Once skeptical companies see what other businesses of similar size and in similar industries have done with virtualization, most of their reservations are removed.

Q:   What are the real costs—initial and ongoing—of investing in virtualization? What are some of the hardware and software requirements?

A:    Asprey, Trend Micro: The bottom line is that your existing infrastructure almost certainly will support virtualization using VMware vSphere®, so additional hardware may not be required. If you choose to perform a server consolidation as you are deploying virtualization, your savings in space and power in the data center, combined with management efficiency, can easily offset some of the hardware costs. You are definitely going to want a virtualization-aware agentless security solution like Trend Micro’s award-winning Deep Security. This goes far beyond typical security software for non-virtualized servers.

Sweeney, VMware: The right approach to virtualization is evolutionary, so you don’t have to rip and replace working infrastructure. We understand that small and midsize businesses don’t have the same resources as large enterprises, so we offer VMware vSphere with Operations Management and Trend Micro Deep Security to meet the needs of midsize businesses. Available in three editions, this joint solution gives you choice now and in the future. To preserve existing investments, it is designed so you can begin by fully utilizing the infrastructure you have in place. You will typically find that the minimum requirements for a lightweight VMware virtualization infrastructure – which includes server, network and storage components, plus recommended software components – can be met with the hardware you already own. However, we do recommend you work with a local VMware partner – who can better understand your business needs – to provide specifics about the best-matched virtualization solution for your organization.

Part 2 of this conversation at Trend Micro Security Blog.

If you have a mid-market business, you can learn more about virtualization at http://www.vmware.com or http://www.trendmicro.com.

Follow VMware SMB on Facebook, Twitter, Spiceworks and Google+ for more blog posts, conversation with your peers, and additional insights on IT issues facing small to midmarket businesses.


[1] Defined here as companies with 100-999 employees.

[2] Gartner. “Market Essentials Report,” February 2012.

by VMware SMB at June 06, 2013 12:53 PM

June 05, 2013

Get Started with the VMware Cloud Credits Purchasing Program Today!

VMware vCloud Blog

In today’s economy, it’s of paramount importance for businesses to be able to scale rapidly in order to meet changing business demands. However, it may be difficult for IT to meet a business unit’s requirements because of slow provisioning times. The business will then look for alternatives to ensure they are able to meet deadlines, leading to rogue spend, as some public cloud providers make it easy for the business to side-step IT, swipe a credit card and consume cloud. Rogue IT spend can create significant issues for businesses, such as increased security and compliance risk, reduced cost visibility, and management overhead resulting from cloud sprawl.

However, there is a solution that increases business agility and provides for IT control – the VMware Cloud Credits Purchasing Program!

The VMware Cloud Credits Purchasing Program provides a method for you to purchase and manage public or hybrid cloud spending in one transaction. Take advantage of budgeting or project cycles to pre-purchase cloud services, and then redeem the credits when business demand dictates. Cloud Credits enable an easy on-ramp to the cloud and create a mechanism to control consumption of cloud services by the business with approved vCloud Service Providers. The net result allows your business to take advantage of cloud economics to improve business agility, while reducing IT capital expenditures on new equipment. In summary, VMware Cloud Credits give you increased budget flexibility, as well as the ability to maintain control and visibility of your cloud spend through your My VMware account.

Areas where the VMware Cloud Credits Purchasing Program can help enable your journey to the cloud:

  • Budgeting: You can purchase Cloud Credits standalone or include them in Enterprise License Agreements (ELAs), enabling you to redeem them over time.
  • Risk: Use My VMware to filter vCloud Service Providers approved for Cloud Credits redemption.
  • Control: Use My VMware to allocate Cloud Credits to the Business and report usage.
  • Compatibility: Cloud Credits are redeemed with vCloud Service Providers running the same VMware technology off-premise as you use in your data center.
  • Agility: The Business redeems Cloud Credits with fast provisioning vCloud Service Providers as demand dictates.
  • Management: My VMware provides a single-pane-of-glass for public and hybrid cloud spend through Cloud Credits alongside the already familiar perpetual license view.

So get started with Cloud Credits today! Here’s how:

  1. Work with your VMware Solution Provider to identify your public or hybrid cloud needs, and then purchase Cloud Credits in line with your requirements. Cloud Credits are available for purchase globally and can be included in ELAs.
  2. Establish Cloud Credit funds and fund owners through the My VMware portal.
  3. Consult with your Solution Provider to select an approved VMware vCloud Service Provider and determine the appropriate service requirements.
  4. Consume cloud services from your chosen vCloud Service Provider and approve the redemption of credits towards cloud usage through the My VMware portal.

Eliminate rogue IT spend and reduce cloud sprawl by taking advantage of the VMware Cloud Credits Purchasing Program today.

For more information go to: www.vmware.com/go/vmwarecloudcredits.

Be sure to follow @VMwareSP on Twitter for future updates!

by vCloud Team at June 05, 2013 05:08 PM

VMware Certification Exams 75% off During VMworld

VMware Education & Certification Blog

All VMware Certification exams, including advanced professional certifications, will be 75% off the regular price, when taken on-site at VMworld San Francisco!

  • VMware Certified Professional – Data Center Virtualization (VCP-DCV)
  • VMware Certified Professional – Cloud (VCP-Cloud)
  • VMware Infrastructure as a Service exam*
  • VMware Certified Professional – Desktop (VCP-DT)
  • VMware Certified Advanced Professional – Data Center Design (VCAP-DCD)*
  • VMware Certified Advanced Professional – Data Center Administration (VCAP-DCA)*
  • VMware Certified Advanced Professional – Cloud Infrastructure Design (VCAP-CID)*
  • VMware Certified Advanced Professional – Cloud Infrastructure Administration (VCAP-CIA)*
  • VMware Certified Advanced Professional – Desktop Design (VCAP-DTD)*

*Please note: you must have authorization from VMware before you can register for these exams.

This is a great opportunity to validate your cloud and virtualization skills – at a significant discount.

Space is limited, reserve your seat today!

by Jill Liles at June 05, 2013 04:36 PM

Webcast: VMware vSphere Data Protection

VMware vSphere Blog

VMware vSphere Data Protection has been out for quite a few months now, but there are still many who haven’t heard of it, or who have heard of it and would like to find out more. If you are in either of those groups, or simply need a refresher, here is an opportunity to learn more about vSphere Data Protection and vSphere Data Protection Advanced: a webinar on Thursday, June 6, 2013 at 10:00 AM Pacific Daylight Time (PDT). Here is the link to attend, along with the session abstract:

Webinar Registration

VMware vSphere Data Protection Advanced is a new edition in VMware’s backup and recovery lineup that extends the capabilities of the vSphere Data Protection software included with most vSphere editions. With vSphere Data Protection Advanced, midsize customers can protect their environment with a virtual appliance that scales to 8 TB of deduplicated data, using agentless image-level backups or application-aware agents for Microsoft SQL Server and Exchange. Attend this webcast and learn how vSphere Data Protection Advanced enables you to:

  • Dramatically reduce backup storage consumption and recovery times with a unique deduplication engine
  • Save on storage and backup costs while improving availability and operational efficiency
  • Simplify management for vSphere backup and recovery with a “single pane of glass” solution designed specifically for seamless integration with vSphere
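The storage savings in the first bullet come from deduplication: identical blocks of backup data are fingerprinted and stored only once. As a toy illustration of the general idea – a fixed-size-block sketch, not vSphere Data Protection’s actual engine – here is how duplicate blocks collapse to a single stored copy:

```python
import hashlib

def dedup_stats(data: bytes, block_size: int = 4096):
    """Split data into fixed-size blocks; return (total blocks, unique blocks)."""
    unique = set()
    total = 0
    for i in range(0, len(data), block_size):
        block = data[i:i + block_size]
        unique.add(hashlib.sha256(block).hexdigest())  # block fingerprint
        total += 1
    return total, len(unique)

# 100 identical 4 KB blocks deduplicate down to one stored block.
total, unique = dedup_stats(b"\x00" * (4096 * 100))
print(total, unique)  # 100 1
```

A production engine adds variable-length segmenting, fingerprint indexes and so on, but the principle is the same: only unique blocks consume backup storage.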

@jhuntervmware

by Jeff Hunter at June 05, 2013 03:10 PM

June 04, 2013

Save 50% on BETA Course: VMware Horizon Mirage: Install, Configure, Manage [V4.0]

VMware Education & Certification Blog

VMware offers BETA courses to those who want to participate in finalizing a near-complete course, and you save 50% off the course price. Register today, as BETA courses fill up quickly.

VMware Horizon Mirage: Install, Configure, Manage [V4.0]-BETA
Location: classroom delivery in Dallas TX, United States
Time:  July 9-10 @ 9:00 CDT

See course description and register today! Or see what other VMware BETA Courses are available

by Elaine Sherwood at June 04, 2013 07:44 PM

Staying Ahead in the Boom of the Mobile Workforce

VMware Consulting Blog

Today’s IT department is inundated by new devices, new applications and new ways to work. It used to be that IT defined, provided and supported the device or endpoint; they defined the refresh or upgrade cycle; they assessed, procured and installed all the applications. Users had very little influence or input into what they used at work. Today, that’s all changed.

In this two-part video blog, Ted Ohr, Sr. Director of Professional Services, and Mason Uyeda, Sr. Director of Technical Marketing and Enablement, discuss the incredible explosion around end-user computing and the mobile workforce, the challenges that IT faces and what VMware is doing about it.

In this new landscape, we have users with choice, multiple devices and multiple ways for IT to approach the challenges of control vs. agility vs. cost. In Part 2, Ted and Mason highlight VMware’s IT solutions for the customer, providing users access to the data and applications they need to get the job done.

With over 18 years of technology experience, Ted Ohr is the Senior Director of Americas Service Delivery, which includes Software Defined Data Center, Mobility, Project Management and Technical Account Management. In addition to driving services revenue growth in Latin America, he is also responsible for leading all aspects of service delivery, thought leadership and best practices for VMware’s Professional Services business for both North and Latin America, helping to ensure customer success and satisfaction.

Mason Uyeda joined VMware in November 2007 and leads technical and solution marketing for VMware’s end-user computing business, bringing more than 18 years of experience in strategy, product marketing, and product management. He is responsible for the development and marketing of solutions that feature such end-user computing technologies as desktop virtualization and workspace aggregation.

by VMware Consulting at June 04, 2013 06:07 PM

Part 2: Staying Ahead in the Boom of the Mobile Workforce (VMware)

VMwareTV

In this new landscape, we have users with choice, multiple devices and multiple ways for IT to approach the challenges of control vs. agility vs. cost. In Part 2, Ted and Mason highlight VMware's...
From: vmwaretv
Views: 4
0 ratings
Time: 03:08 More in Science & Technology

by vmwaretv at June 04, 2013 04:38 PM

Take a Peek Inside VMware vCloud Hybrid Service – Register for 6/12 Webinar!

VMware vCloud Blog

Last month, we unveiled the new vCloud Hybrid Service – an infrastructure-as-a-service (IaaS) cloud built and operated by VMware that enables our customers to achieve the benefits of the public cloud using all the applications, skills and management tools they already know and trust.

Since the announcement, we’ve seen a lot of interest and questions come in from the community regarding how the new service works and its user experience. Next week is your chance to see firsthand how VMware is truly delivering on the promise of the hybrid cloud – register for our webinar, “Extending Your Data Center with the New vCloud Hybrid Service,” taking place next Wednesday, June 12 at 10am PT.

The webinar will include a demo of the vCloud Hybrid Service in action, including:

  • Creating a new virtual machine from the cloud portal;
  • Connecting your existing vSphere environment to vCloud Hybrid Service;
  • Seamlessly migrating existing workloads to vCloud Hybrid Service.

See firsthand how your organization can combine the convenience and agility of an on-demand public cloud with the freedom to run, and have support for, more than 3,700 applications on a trusted infrastructure – onsite, offsite or both – with ease and without compromise. Register now!

For future updates, be sure to follow us on Twitter at @vCloud.

by vCloud Team at June 04, 2013 04:25 PM

The Pace of Private Cloud Deployments is Accelerating

VMware Virtualization Management Blog

In the latest 2013 TechTarget Data Center and Readers’ Choice survey of over 600 responses, 15.5% of those surveyed said they planned to deploy a private cloud in 2013. This represented almost 90% growth over the previous year. In Forrester’s annual survey, they reported an increase in interest in private clouds from 35% last year to 46% this year – the biggest increase in all of their cloud categories. For more information about the growing interest in private cloud deployments, check out Beth Pariseau’s article, “Interest in Private Clouds Grows as Market Matures,” featured on SearchCloudComputing.com.

The increase in both interest and deployment of private and hybrid cloud deployments is consistent with the demand VMware is seeing from customers.  As companies are looking to deploy their virtual infrastructures, they continue to look at ways to accelerate the delivery of business critical IT resources while at the same time driving cost efficiencies through improvements in both operational efficiencies and resource utilization.

In the past, implementing a virtual infrastructure not only drove greater hardware utilization efficiencies, it also helped companies deploy more agile IT infrastructures where compute resources that previously took weeks to deliver could be delivered in days. Today, the next step in the data center efficiency evolution is deploying an on-demand cloud infrastructure. Private cloud automation removes manual, error-prone configuration processes that take days to implement and replaces them with faster, more effective compute resources and applications that are delivered in minutes.

In addition, policy-based governance delivers even greater hardware utilization efficiencies by preventing over-provisioning and reclaiming inactive resources. Implementing a private cloud helps many companies deliver similar orders-of-magnitude improvements in both service delivery acceleration and savings efficiencies when compared to their prior virtualization deployments.

VMware vCloud Automation Center empowers IT to transform existing compute resources into scalable infrastructure as a service (IaaS) or platform as a service (PaaS) in days. Our customer references on VMware.com show how we have helped companies automate the end-to-end delivery and management of infrastructure and application services in days, using existing IT investments and established business and IT policies and processes. To learn more about vCloud Automation Center, we offer a variety of whitepapers, videos, and other resources on VMware.com.

Don’t get left behind by more agile and nimble competitors. vCloud Automation Center helps companies improve operational and resource utilization efficiencies while reducing IT service delivery times from days to minutes. To learn more about the ROI of private cloud deployments, I recommend you take ten minutes and watch Building the Cloud Automation Business Case. VMware has helped a number of companies quantify their private cloud business justification. Contact us to get more information and help with your private cloud.

by Rich Bourdeau at June 04, 2013 03:00 PM

Why You Should Run vSphere Enterprise +

VMware for Small-Medium Business Blog

by Mike Fegan, VMware SE, Mid-Market Team

While talking to customers, the question typically comes up: “Why should I upgrade to Enterprise + licensing?” For me the answer is pretty simple. If you are running mission-critical / Tier 1 applications inside your virtual infrastructure, you should be running Enterprise +. Enterprise + offers best-of-breed features for all of the VMs where you need the greatest uptime and performance. Don’t get me wrong, Standard and Enterprise are great products, but they don’t offer the extensive protection and resiliency you get with Enterprise +.

A Prime Example

A couple of weeks ago I was working with a customer that had been experiencing performance issues with a couple of their SQL VMs. After a bit of discussion it appeared that they had everything configured properly, following the best practices outlined in the “Microsoft SQL Server on VMware Best Practices Guide” (http://www.vmware.com/files/pdf/solutions/SQL_Server_on_VMware-Best_Practices_Guide.pdf). Their vSphere configuration was actually a very simple setup: 5 hosts and approximately 200 VMs, with standard Cisco switches, routers, firewalls, etc. They were running vSphere Enterprise and vCenter Standard 5.1, and since they weren’t on Enterprise + they were using standard virtual switches. At face value they were following the book on best practices: 2 NICs dedicated to the management network, 2 NICs dedicated to vMotion, 2 NICs dedicated to iSCSI, and 2 NICs dedicated to VM traffic.

After a bit of head scratching, I had the customer share their desktop with me via WebEx. Knowing that SQL can be demanding on disk I/O, I decided to dive into the iSCSI configuration a bit more. I went to the network configuration for the first host to investigate the storage vSwitch and noticed the MTU was set to 1500. I asked the customer if the iSCSI SAN was configured for jumbo frames. His response was “I don’t think so.” The obvious next step was to verify the configuration of the NICs on the SAN. The customer pulled up the management interface of the SAN and, sure enough, the NICs were set to an MTU of 9000, and they were suffering from severe packet fragmentation. I recommended that the customer change the MTU to 9000 on the vSwitch as well as on the iSCSI switches. The customer made the change and experienced a world of difference.
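For readers who want to audit their own environments, the same check can be scripted. The following is only a quick sketch using PowerCLI: it assumes an existing Connect-VIServer session to vCenter, and “vSwitch-iSCSI” is a hypothetical switch name, so substitute your own.

```powershell
# List the MTU configured on every standard vSwitch on every host
Get-VMHost | Get-VirtualSwitch | Select-Object VMHost, Name, Mtu

# Raise the (hypothetical) iSCSI vSwitch on each host to jumbo frames
Get-VMHost | Get-VirtualSwitch -Name "vSwitch-iSCSI" |
    Set-VirtualSwitch -Mtu 9000 -Confirm:$false
```

Remember that jumbo frames must match end to end: the vSwitch, the VMkernel ports, the physical switches and the SAN all need the same MTU, or you end up exactly where this customer did.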

This Headache Could Have Been Avoided with Enterprise +

Fast-forward two weeks: this customer is now running Enterprise + and using the Virtual Distributed Switch. Why, you ask? The Virtual Distributed Switch has a feature called “Network Health Check” (http://pubs.vmware.com/vsphere-51/index.jsp?topic=%2Fcom.vmware.vsphere.networking.doc%2FGUID-4A6C1E1C-8577-4AE6-8459-EEB942779A82.html).

This feature periodically examines individual port groups for MTU and VLAN mismatches to ensure proper configuration. In the case above, Network Health Check would have alerted the customer to the MTU mismatch on their storage vSwitch. They could then have taken action immediately rather than suffering through complaints from end users.

How Do I Configure Network Health Check?

In order to enable the Health Check feature, follow these simple steps:

  • From the Home Screen, click on “Networking”
  • Highlight the Virtual Distributed Switch
  • Click on the “Manage” tab and choose “Settings -> Health Check”
  • Click on “Edit”
  • Hit the dropdown and choose “Enabled”
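If you prefer scripting, the same setting can be toggled through the vSphere API. This is only a sketch, assuming PowerCLI 5.1 with the Distributed Switch cmdlets and an existing vCenter session; “dvSwitch-Prod” is a hypothetical switch name.

```powershell
# Enable the vDS Health Check for both VLAN/MTU and teaming checks
$vds = Get-VDSwitch -Name "dvSwitch-Prod"

$vlanMtu = New-Object VMware.Vim.VMwareDVSVlanMtuHealthCheckConfig
$vlanMtu.Enable = $true
$vlanMtu.Interval = 1    # minutes between checks

$teaming = New-Object VMware.Vim.VMwareDVSTeamingHealthCheckConfig
$teaming.Enable = $true
$teaming.Interval = 1

# The API method takes an array of health-check configs
$vds.ExtensionData.UpdateDVSHealthCheckConfig(@($vlanMtu, $teaming))
```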

That’s Not All!

In addition to the Network Health Check component, the Virtual Distributed Switch offers many advantages over the Virtual Standard Switch: management network stability, simpler management, network vMotion, and the list goes on. Who needs to use a vDS? Every vSphere user! My colleague and fellow VMware Systems Engineer, Chris Cousins, will be posting an extensive blog post on the many advantages the vDS provides in the next couple of weeks. It’s sure to be a good read and I encourage everyone to check it out!

In the meantime, here is a link that explains the many benefits you gain from enabling and migrating your VMs to the Virtual Distributed Switch: http://www.vmware.com/products/datacenter-virtualization/vsphere/distributed-switch.html

How has vSphere Enterprise + helped your IT infrastructure?    I look forward to your comments.

Mike Fegan

Follow VMware SMB on Facebook, Twitter, Spiceworks and Google+ for more blog posts, conversation with your peers, and additional insights on IT issues facing small to midmarket businesses.

by VMware SMB at June 04, 2013 02:58 PM

Question of the Week: VCP5-DCV

VMware Education & Certification Blog

This week’s “Question of the Week” comes from the VMware Certified Professional 5-Data Center Virtualization (VCP5-DCV) Official Study Guide.


Which of the following is the largest extent that you can have in a VMFS-5 datastore?
a. 2TB minus 512KB
b. 4TB
c. 64TB
d. 256GB

See below for the answer


Not sure of the answer? You can learn more about this topic in our VMware vSphere: Install, Configure, Manage course.

Answer: c. 64TB

by Angela Guzman at June 04, 2013 01:50 PM

June 03, 2013

Hands-On Labs 2013, Part 1

VMware vSphere Blog

The New Guy

I am privileged to be a new addition to the Hands-on Labs team within Technical Marketing at VMware. I have been here just under 3 months, but I have been using our products for almost as long as we have had products. I am active in the community and have spent time in the field as both a customer and a partner. I hope that my background allows me to provide a unique perspective on what we do in my group. There were some things that I always wanted to know, and I’d like to share as much of that with you as I can.

As the new guy on this team, I have spent quite a bit of time understanding how the Hands-on Labs infrastructure is set up, where resources are located, and how we deploy labs to support various conferences, user groups, and the new 24×7 online activities.

Hands-on Labs Online

If you don’t know about the free HOL Online portal, you should stop reading and go sign up for an account right now. Seriously, point your browser to http://hol.vmware.com/ and get an account. Now. It’s in Public Beta and we’re signing people up continuously. Request an account and you should have one soon. Of course, I would appreciate it if you decided to come back here after you sign up.

Hands-on Labs @ VMworld

Most people know about the labs at VMworld. In fact, the labs have consistently been one of the highlights of the show for a large portion of attendees. I had been involved with the labs in the past as a content creator and presenter, so my new role is something that I am really excited about.

Several years back, we had two different types of labs: instructor-led labs and self-paced labs. Each type had its own benefits and drawbacks. For example, instructor-led labs were like classes and attendees had access to the people who actually developed the lab content because they tended to lead those sessions. Unfortunately, it was often very difficult to get into the lab sessions that you wanted because there were limited seats and sessions available. The capacity issue was addressed by the self-paced labs, and enhanced by a slick provisioning system that allowed any self-paced lab to be taken from any station in the pool. The drawback was that the people who created the labs were not always available when you wanted to take the lab.

As long as the labs were available, most attendees didn’t seem to mind, especially if we could direct them to a session where the lab’s topic was covered in more depth. However, our goal is to have as many subject-matter experts in the lab area as possible to answer questions as they arise. At conferences, we do our best to schedule lab resources so that one of the content contributors for each lab is on the floor during all posted lab hours.

Our current Hands-on Labs offering has evolved from this self-service model. At VMworld US 2012, we debuted our first BYOD capability. We expect to enhance that capability this year and provide several different types of lab experiences in addition to a whole batch of fresh, VMworld-exclusive content. It is no secret that we experienced some challenges with the labs at VMworld last year. We have listened to your feedback and made some changes. I firmly believe that your lab experience this year will be more satisfying.

Cloud!

When I think about it, even though our use case is somewhat unique, what we are doing here has many of the characteristics of “cloud”:

  • On-demand Self-service: Anyone with an HOL online account can sign in and experience a lab. The environments are provisioned on-demand and presented for use. (Well, technically, we maintain some pre-deployed instances of each lab in order to save you time. This works similar to the way that VMware Horizon View pools work and is handled by our front-end application.)
  • Measured Service: We don’t charge for this service, but the “cost” could be measured in minutes: when you enroll in a lab, you get to use the environment for a set amount of time. When that time expires, your environment goes away. You can get a new one as many times as you would like, but nothing persists beyond the allocation.
  • Leveraging Pooled Resources and Rapid Elasticity: Our labs are designed to be self-contained, deploy quickly, run for a finite period, then disappear. This is an incarnation of what I like to call the “Paper Towel” use case: need it, get it, use it, toss it. We deploy known, fixed blocks of capacity, which have been designed for a specific use case.

For our use case, availability is important, but not in the traditional sense:

  • There are specific times during the year when we need 100% availability. At the VMworld and Partner Exchange conferences, attendees want the labs available during all of the hours that the labs are open — and more!
  • The remainder of the year, people accessing our online portal would be inconvenienced if their lab disappeared due to a backend issue. However, it wouldn’t kill them to re-enroll and get a new copy. As long as we design for this kind of failover (i.e. not preserving any state), we are fine: we’re not running a reactor, mail server, or performing brain surgery here.

From a design and capacity perspective, we have to design for steady usage with some pretty massive spikes:

  • Typical usage of labs via the HOL online portal’s public beta averages 600-700 labs per week. We typically have 6-10 people taking labs concurrently unless there is an event of some kind going on. We have roughly 60 pre-staged copies of various labs deployed and waiting for you to use, with over 1,000 VMs deployed within the tenant that services the HOL online portal.
  • As for spikes, during a conference, we deploy and destroy an average of 8,000 VMs per hour!

What about Hardware?

All of that cloud stuff is well and good, but I’ll bet many of you want to know what’s behind the curtain: what gear do we use to run this environment, and what does it look like? I will have to save that for another post.

by Doug Baer at June 03, 2013 08:01 PM

vCloud Networking and Security 5.1 App Firewall Best Practices

VMware vSphere Blog

This blog provides best practices for deploying vCloud Networking and Security 5.1 App Firewall. Thanks to Shubha Bheemarao, Ray Budavari and Rob Randell for helping me in compiling this.

Installation

  • Install vCloud Networking and Security Manager (aka vShield Manager) on a dedicated management cluster. Other components installed on this cluster include VMware vCenter Server, vCloud Director, etc.
  • vCloud Networking and Security Manager should be run on an ESXi host that is not affected by downtime, such as frequent reboots or maintenance mode operations. Use vSphere HA to increase the resilience of the Manager. Thus, a cluster with more than one ESXi host is recommended.
  • Install vCloud Networking and Security App Firewall on all vSphere hosts within a cluster so that virtual machines remain protected as they migrate between vSphere hosts.
  • The management interfaces of vCloud Networking and Security components should be placed in a common network, such as the vSphere management network. The Manager requires IP connectivity to the vCenter Server, ESXi hosts, and App Firewall virtual machines. Refer to the KB article for the network port requirements for vCloud Networking and Security. It is a best practice to separate management traffic from production traffic.
  • If the vCenter Server or vCenter Server database virtual machines are on the ESXi host on which you are installing App Firewall, migrate them to another host before installing App Firewall or exclude these virtual machines from vCloud Networking and Security App Firewall protection.
  • Install VMware Tools on each virtual machine. The vCloud Networking and Security Manager collects the IP addresses of virtual machines from VMware Tools on each virtual machine. Use App Firewall SpoofGuard to authorize the IP addresses reported by VMware Tools to prevent spoofing. With SpoofGuard, use trust-on-first-use to reduce administrative overhead.

Firewall Policy Management

  • Use vCenter containers (vApps, resource pools, port groups, etc.) and security groups (groupings of vApps, resource pools, port groups, vNICs, etc.) instead of IP addresses for policy enforcement. This allows you to create security policies that follow virtual machines during vMotion and are completely transparent to IP address changes and network renumbering. In addition, vCenter containers and security groups make rules dynamic: when a new virtual machine joins the container or security group, the rules are applied automatically and there is no need to define new rules.
  • Use service groups to combine multiple services to reduce the number of entries in the rule table.
  • Ethernet rules control which higher-level protocols (such as ARP, IPv6, PPP and so on) can communicate over L2. By assessing what communication is required between applications and each tier of the application, create Ethernet rules that block all unnecessary traffic, with a default any-to-any allow at the end. Ethernet rules are enforced before the General rules. When a packet is allowed by an Ethernet rule, it is further inspected by the General rules; when a packet is denied by an Ethernet rule, the General rules are not evaluated.
  • General rules control specific L3 traffic based on IP addresses, as well as L4 traffic based on TCP and UDP ports. Explicitly add rules to allow the communication required between applications and each tier of the application, with a default any-to-any deny at the end.
  • Set a firewall rule to enforce L2 isolation between servers in a security group when applicable, e.g. isolating one web server from another. This can prevent the spread of malware when one machine gets infected and provides PVLAN-like capability that is more easily managed. App Firewall provides better security than PVLANs, particularly when combined with SpoofGuard.
  • Set the Fail Safe mode to Block. In the remote event of an App Firewall service virtual machine failure, this setting blocks traffic to all virtual machines running on the host, preventing any security exposure.
  • App Firewall protects applications within the virtual datacenter, whereas Edge Firewall provides protection at the perimeter of the virtual datacenter. In multi-tenant deployments, each tenant would have separate Edge devices. Use App Firewall for firewalling between virtual machines within the same tenant, and Edge Firewall for isolating traffic between tenants.
  • In a multi-tenant deployment, App Firewall allows you to assign independent IP addresses to specific port groups. You can mark a port group as an independent namespace, and the datacenter-level firewall rules then no longer apply to that port group. This is done automatically for VXLAN virtual wires, i.e. a separate namespace is created for each VXLAN network.

Day to Day Operations / Troubleshooting

  • Regularly monitor the allowed/denied flows using Flow Monitoring to ensure that firewall rules are set up correctly. Use Flow Monitoring to audit network traffic, define and refine firewall policies, and identify threats to the network.
  • Set up syslog servers on App Firewall for central logging. Enable logging on a per-rule basis to send the Allow/Deny syslog messages to the central syslog server. Use the ‘Rule ID’ in the App Firewall rule table to correlate syslog messages with the corresponding firewall rules.
  • Set up NTP to ensure accurate timestamps on log messages. All App Firewall instances use the NTP server configuration of the vCloud Networking and Security Manager.
  • Use the comments field in each App Firewall rule to keep track of the changes.
  • Use the Load History option to revert the vCloud Networking and Security App firewall configuration to a previous version. vCloud Networking and Security Manager saves the App firewall configuration each time new firewall rules are published and retains the previous ten configurations.
  • Schedule periodic backup of vCloud Networking and Security Manager data, which can include configuration, events, and audit log tables.
  • After creating a full backup of the vCloud Networking and Security Manager database, shut down the virtual machine, then take a snapshot or full clone of the virtual machine prior to any upgrades. Refer to the KB article for additional information.

Get notification of these blogs and more vCloud Networking and Security information by following me on Twitter @vCloudNetSec.

by Ranga Maddipudi at June 03, 2013 07:00 PM

2 Ways to Save on Training in June

VMware Education & Certification Blog

You still have time to take advantage of two hot discount programs running in the US and Canada for VMware direct-delivery training classes.

Don’t miss out on this opportunity to expand and validate your cloud and data center virtualization skills. But, hurry – both of these discounts end June 30.

by Jill Liles at June 03, 2013 05:06 PM

Giving PCs New Purpose

VMware End User Computing

By: Courtney Burry, Director of Product Marketing, End User Computing, VMware

Simplifying desktop management is still a very compelling reason for organizations across the globe to make the move to desktop virtualization. And not surprisingly, many organizations make this move when faced with an upcoming or imminent PC refresh. Why? Because they can take the money that would typically go into buying a whole new fleet of PCs and instead invest it in virtual desktop infrastructure. PCs can be repurposed to run as thin clients and given a new lease on life, usually with better performance. And IT organizations can focus on improving data security, supporting workplace mobility and driving down the day-to-day costs of desktop management.

Information Age highlighted a great example of this earlier this week with Hertz, the global car rental company. Faced with the need to improve operational efficiencies, Hertz had the choice of refreshing its fleet of 4,000 PCs and laptops spread across more than 1,000 sites in Europe or moving to desktop virtualization.

By moving to desktop virtualization with VMware® Horizon View™, Hertz has been able to improve PCI compliance and security, reduce operational costs (help desk incidents alone have dropped by 33%) and simplify technical infrastructure.

Not surprisingly, Hertz expects to save significantly on hardware investments in the coming years by extending desktop lifecycles from three-to-five years up to 10 years with the use of Dell Wyse thin clients.

Western Wayne, a school district in Pennsylvania, is another really good case in point. The district received a “classrooms for the future” grant a couple of years back. Instead of putting the money into new laptops, the IT department opted to build out their virtual desktop infrastructure and move to thin clients. And while the district spent a good chunk of their grant on getting their virtual desktop project off the ground in year one, by year two they were seeing real savings. In fact, they were even able to take some of these savings and offer up funding to the art and music departments for new technology purchases.

Now, if you can’t repurpose PCs that you own when moving to VDI, certain organizations have shown that you can repurpose the PCs of your partners instead…

Facing significant budget cuts, the Iowa Workforce and Development Agency, one of Iowa’s largest state agencies, was asked to close over half of its 55 offices. Still intent on providing agency services, it partnered with other organizations that had PCs on hand for public use, including other state agencies, public libraries, National Guard offices and colleges, and leveraged virtual desktops to drive down costs by $6.5M, enhance security and reach more people than ever before. And it didn’t matter that the partner PCs were older, since they were repurposed to run as thin clients. Today the agency has over 1,500 virtual desktops running across all 99 counties in the state.

Looking to replace your fleet of desktops, laptops or tablets this year? You may just want to take a look at desktop virtualization and repurpose those PCs instead. :)

Are you considering desktop virtualization?  Tell us all about it on Twitter and Facebook!

by Sarah Semple at June 03, 2013 04:00 PM

Top 20 Articles for May 2013

VMware Support Insider

Here is our Top 20 KB list for May 2013. This list is ranked by the number of times a VMware Support Request was resolved by following the steps in a published Knowledge Base article.

  1. Downloading and installing VMware Fusion (2014097)
  2. Uploading diagnostic information to VMware (1008525)
  3. Collecting diagnostic information for VMware ESX/ESXi using the vSphere Client (653)
  4. Installing async drivers on ESXi 5.x (2005205)
  5. Installing Windows in a virtual machine using VMware Fusion Easy Install (1011677)
  6. Broadcom 5719/5720 NICs using tg3 driver become unresponsive and stop traffic in vSphere (2035701)
  7. Troubleshooting Fusion virtual machine performance issues (1015676)
  8. Repointing and reregistering VMware vCenter Server 5.1.x and components (2033620)
  9. vSphere handling of LUNs detected as snapshot LUNs (1011387)
  10. Unmounting a LUN or Detaching a Datastore/Storage Device from multiple ESXi 5.x hosts (2004605)
  11. Creating a persistent scratch location for ESXi 4.x and 5.x (1033696)
  12. Installing or upgrading to ESXi 5.1 best practices (2032756)
  13. Purging old data from the database used by vCenter Server (1025914)
  14. Upgrading to vCenter Server 5.1 best practices (2021193)
  15. Installing VMware Tools in a Fusion virtual machine running Windows (1003417)
  16. Cannot log in to vCenter Server using the domain username/password credentials via the vSphere Web Client/vSphere Client after upgrading to vCenter Server 5.1 Update 1 (2050941)
  17. Accessing VMware downloads (2006993)
  18. Collecting diagnostic information for VMware vShield Manager (1029717)
  19. Manually deleting linked clones or stale virtual desktop entries from VMware View Manager 3.x and 4.0.x (1008658)
  20. Recreating a missing virtual machine disk (VMDK) descriptor file (1002511)

by Rick Blythe at June 03, 2013 02:41 PM

Hidden benefits of virtualisation – uneven hardware

VMware Technical Account Manager Blog

By guest blogger, Christian Wickham, Technical Account Manager, South Australia and Northern Territory, and Local Government and Councils in Western Australia, Victoria and New South Wales at VMware Australia and New Zealand

Hidden benefits of virtualisation – uneven hardware

Within VMware we are often focussing on the latest and greatest features and capabilities offered by our newest software. Of course, we are always driving forward and the next version’s enhancements and benefits are at the forefront of our minds – but there are still some people out there who are just starting on their virtualisation journey. The premium editions of vSphere, such as Enterprise Plus and the vCloud Suite editions, offer exceptional advances for businesses and enterprises, but some smaller businesses are unable to afford these editions – particularly at the start.

Some benefits of virtualisation, particularly with vSphere, are inherent and included in all versions – and deliver significant savings in both money and time. In this series, I will outline some of the simple benefits that are often not highlighted to new users of virtualisation, but well known to existing users.

It is definitely a trend within the hardware industry to develop servers that are optimised for virtualisation: high memory density, multiple built-in network cards, support for the latest multi-core CPUs, and many other enhancements. It’s actually quite hard to buy a good-quality server that has just 2 CPU cores, 4 GB of RAM, 40GB of fault-tolerant disk space and a single network card – yet these are often the requirements of server software for small and medium businesses.

Interestingly, in my experience of working with VMware customers (and my history of being a VMware customer for 5 years too), some server software actually consumes fewer resources than even that! It’s common to see a Windows 2008 R2 server actively using less than 256MB of RAM, 100 MHz (that’s 0.1 GHz) of CPU and, after the installation of Windows, less than 5GB of disk space. Why don’t you try and buy a physical server with those specs – you can’t! In discussions that I have had with software vendors, they often “bump up” their official minimum hardware specifications to the level of a mainstream standard server, because customers keep contacting them to ask if their ‘powerful’ server is appropriate.

Based on proper analysis (such as through vCenter Operations Manager – vC Ops), or even careful manual ad-hoc analysis of the vCenter performance statistics over a reasonable time, it might become apparent that your servers are over-sized. The recommendation might come back from vC Ops that your servers should have 384MB of RAM, or 3 CPUs. Unusual sizes? Not with vSphere. You can set odd numbers of CPUs (uneven, not ‘strange’…) and memory sizes in increments of less than 1GB.

There is a tiny drawback here though. If you need to resize your VMs downwards, the operating system would get quite upset if you pulled out memory or CPUs whilst they were in use – that’s why you are prevented from doing it in the vSphere client(s). Instead, you need to power down the VM, make the changes, and then when it powers on, the new specifications take effect.

The upside is, if you have vSphere 5.1 Standard or above, you have hot-add of CPU and memory (vSphere 4.x and 5.0 need Enterprise or above). This needs to be activated on each VM (virtual hardware version 7 or later) whilst it is powered off (so we recommend you set this on your templates), and in usual VMware fashion it is a single mouse-click GUI option to ‘enable’. Depending on your Windows edition, hot-added memory is immediately accessible (2003 Enterprise, 2008 Enterprise, 2008 R2, 2012) and hot-added CPU is immediately accessible (2008 R2 Enterprise, 2012); otherwise a reboot is required (for RAM: 2008 Standard; for CPU: all 2008 and 2008 R2 Standard editions). For Linux flavours, hot-add varies depending on your distro – some recognise the new hardware immediately, and some require kernel commands to recognise the additional CPU(s), or may require a reboot.
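As a sketch of how this looks in PowerCLI (assuming a connected vCenter session; “APP01” is a hypothetical VM name, and the hot-add flags can only be changed while the VM is powered off):

```powershell
# Right-size the VM to an "uneven" specification: 3 vCPUs, 384MB RAM
Set-VM -VM "APP01" -NumCpu 3 -MemoryMB 384 -Confirm:$false

# Enable CPU and memory hot-add via the vSphere API (VM must be powered off)
$vm = Get-VM -Name "APP01"
$spec = New-Object VMware.Vim.VirtualMachineConfigSpec
$spec.CpuHotAddEnabled = $true
$spec.MemoryHotAddEnabled = $true
$vm.ExtensionData.ReconfigVM($spec)
```

Setting the hot-add flags on your templates, as recommended above, means you never have to schedule this outage per VM.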

How much can you add? It depends upon your license – from 8 vCPUs per VM in Essentials all the way up to 64 vCPUs per VM in Enterprise Plus. Memory can be added up to 1TB (or 1,048,576 MB if you would prefer). However, you can’t give an individual VM more virtual CPUs than you have physical CPU cores, and you can’t add more RAM to a VM than you physically have inside the host server. As I mentioned above, though, smaller specifications are often what is needed for most applications used in medium and smaller businesses.

So, we have covered CPU and RAM being assigned in “unusual” numbers, and the ability to assign these very low and then add to it whilst the VM is running as your needs grow. What about network? What about disks?

New disks can be added to a VM at any time, and depending upon the installed operating system, they will be recognised as an unformatted disk, ready to be initialised and used. In older versions of Windows and Linux, you may need to scan for new disks. It only takes a second or two for new disks to be added to the VM – and they can be specified in megabytes, up to 2 terabytes. You can also resize an existing disk, and when you do, the extra capacity is seen by the operating system as unallocated space. With Windows 2003, you cannot resize the boot disk (C drive) or any disk containing a pagefile, but you can resize data disks. In newer versions of Windows, you can resize all disks – but with the same restriction as CPU and RAM: you can add, but not take away.

There are some restrictions on maximums with disks too. You can only add an individual virtual disk of up to 2TB (minus 512 bytes) in vSphere version 5.1 and below, and you can only have a maximum of 64 of these disks (assuming 4 are IDE and 60 are SCSI). You can also directly attach a disk from a SAN to a VM as a raw device mapping (RDM), up to 64 TB. However there are many reasons why again you should keep the number of virtual disks and their sizes smaller.

Network cards can be added whilst a VM is running too. Start with a single NIC, and if you need a second one in the operating system, more can be added – up to 10. In practice, unless you are using your VM as a router or have other application reasons for needing multiple virtual NICs, one is often enough. Additional bandwidth and redundancy can be added at the physical host layer.
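All of the above can be done from PowerCLI while the VM is running. A sketch, again assuming a connected vCenter session; the VM name “APP01” and port group “VM Network” are hypothetical placeholders:

```powershell
$vm = Get-VM -Name "APP01"

# Hot-add a new 20GB thin-provisioned virtual disk
New-HardDisk -VM $vm -CapacityGB 20 -StorageFormat Thin

# Grow an existing disk to 40GB; the guest then sees unallocated space
Get-HardDisk -VM $vm | Select-Object -Last 1 |
    Set-HardDisk -CapacityGB 40 -Confirm:$false

# Hot-add a second network adapter
New-NetworkAdapter -VM $vm -NetworkName "VM Network" -StartConnected
```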

If you are new to virtualisation, or have never tried running a virtual machine with an “uneven” virtual hardware configuration – give it a try and prove to yourself (and your colleagues) that servers do not always need to have 2 or 4 or 6 (CPUs or GB of RAM). Next time you are purchasing software that has a minimum requirement, have a look at your trial or evaluation of the software and see what it is actually using – this might be a start to a whole new density improvement in your vSphere environment.

If you have got this far and are thinking, “what about large enterprises?”, then consider the overall density that sizing your VMs correctly can achieve. If you have a density of around 30 VMs per host, then even reducing each VM by 100MB can release (in this example) a further 3GB per host.

by Neil Isserow at June 03, 2013 02:20 AM

May 31, 2013

Skating your way to the SDDC

VMware vSphere Blog

This week I was reminded of that great Wayne Gretzky quote,

“I skate to where the puck is going to be, not where it has been”.

How is that relevant to the Software Defined Data Center (SDDC)? Well, because things are moving so fast! That virtualization infrastructure you have today (thank you for my paycheck!) is introducing new challenges in IT and security management. What was once a few servers, some network and storage and a firewall is growing into hundreds, if not thousands, of VMs, hybrid clouds, tiered storage and stretched networks. There are new tools to learn and new innovative capabilities to leverage.

But it’s getting very complex!

Yes. It is. Every new technology seems complex at first. Every new technology brings benefits and challenges. (Remember the pre-PC era? I do!) But here’s the good, no, AWESOME part: it’s becoming increasingly easy to automate, validate and assess. However, if you are still managing and securing this new infrastructure using your old methods, you may find yourself skating to where the puck was and not where it’s going.

Here’s a slide that I’ve been using in my current deck for a while now.

[Slide image]

Eliyahu Goldratt, who I recently discovered after I built the slide, was a business management guru. In one of his books, he had two guys talking about some new technology that was being installed. In it, one of the characters says

“…technology is a necessary condition, but it’s not sufficient. To get the benefits we must, at the time that we install the new technology, also change the rules that recognize the existence of the limitation. Common sense.”

If you are applying your existing rules, which WILL impose limitations, how can you be assured of getting the benefits of this new technology? The software defined datacenter is changing the rules. Virtualization already has. Have you re-examined your rules? Are you doing security any differently? Don’t worry, you’re not alone, many haven’t. :)

Existing rules. a.k.a. How NOT to do it

Let’s take, as an example, changing a setting on all your VMs. Let’s say you want to disable the ability to have vCenter auto-install VMware Tools (for whatever reason). Now, according to some security folks, that would mean doing the following steps:

  1. Un-register the VM from vCenter
  2. Connect via the Datastore Browser
  3. Download the .VMX file
  4. Edit the file and make the change
  5. Upload the .VMX file
  6. Re-register the VM to vCenter

That’s not what I would define as a software-defined anything. :) That is a process that is fraught with potential errors and security issues. Plus, from a compliance and general security standpoint, how do you assess whether it was done, or done right? Really, it’s crazy and makes my brain hurt. If the rules have the potential to make you less secure, the rules are broken!

Unfortunately, something like this is called out in a government standard (surprise!) as the required way to do a similar task. Obviously, they have not yet recognized the existence of the limitations.

New rules can benefit everyone

Is there a better way? Yes, you can leverage an IT tool to do this. vCenter has a VERY rich API. In the example above, those steps can be done in a couple of lines of PowerShell thanks to my teammate Alan Renouf and the vSphere Hardening Guide! Note that you can do similar scripting with other scripting languages as well.

Note: These examples use new cmdlets that became available in PowerCLI 5.1 Release 1. (PowerCLI 5.1 is currently up to Release 2.)

# Add the setting to all VMs
Get-VM | New-AdvancedSetting -Name "isolation.tools.autoInstall.disable" -Value $true

Want to assess what the setting is across all VMs?

# List the VMs and their current settings
Get-VM | Get-AdvancedSetting -Name "isolation.tools.autoInstall.disable" | Select Entity, Name, Value

No editing. No de-registering/re-registering of VMs. No leaving copies of VMX files on a desktop. Easy to control, assess and audit. Plus, it’s all done in seconds against all virtual machines rather than days of cumbersome clicking. Want a report on which VMs are set? Outputting the results to a .CSV file is as simple as appending:

| Export-Csv filename.csv

This kind of information becomes valuable to the security guy! Not only that, it can be easily baked into how you do business and, even better, put under version control for further alignment with compliance objectives. This is the software part of SDDC: the ability to lessen the time it takes to get things done, and to do it more efficiently and in an easily measured and assessed fashion.

Find a way to change the rules together

When I meet with customers, I’ll ask if the IT and security teams have the resources (e.g., developers) that can assist them with automating the datacenter. Unfortunately, many don’t. It’s not on their radar because they are so wrapped up in fighting fires that process improvement and redefining the rules fall by the wayside.

I would urge both IT and Security to find a way out of that loop. Skate to where the puck is going.

Leveraging the infrastructure capabilities is KEY to a software-defined datacenter. This means it’s time to consider having a person or persons on your IT team dedicated to writing code; that will allow you to enjoy the benefits of the technology. Become knowledgeable about the growing DevOps movement. I’m exploring it through the lens of security and I’m really, really excited! I’ll share what I find with you in the coming year.

Remember, working with your security team and introducing them to a more efficient way of helping them get their job done not only helps them, it helps IT and gets you both in a better place to get the most out of the technology you purchased.

The payoff of better IT Operations and in turn, MUCH better security, will be well worth it. Position yourself to benefit from technology. Change the rules and start skating!

mike

by Mike Foley at May 31, 2013 06:29 PM

The Illusion of Unlimited Capacity

VMware Cloud Ops Blog

By: Andy Troup 

I was at a customer workshop last week, and I used a phrase that I’ve used a few times to describe one of the crucial capabilities of a successful cloud computing service, namely “The Illusion of Unlimited Capacity.” It got a bit of a reaction, and people seemed to understand the concept quite easily. So apart from its sounding quite cool (maybe I should get out more), why do I keep on using this term?

Well, in cloud computing, we all know that there is no such thing as unlimited capacity – everything is finite. Every cloud provider only has a limited number of servers, a limited amount of storage capacity, and a limited number of virtual and physical network ports – you get the idea, it’s all limited, right?

Paradoxically, though, providers of cloud resources have to make sure their customers believe the opposite: that there is no end to what can be consumed.

The National Institute of Standards and Technology (NIST) defines one of the characteristics of cloud computing as on-demand self-service; i.e. the user can consume what they want, when they want it. Now, for cloud providers to provide on-demand self-service, they need to be confident that they can fulfill all the requests coming from all their consumers, immediately. They need to maintain, in other words, an illusion of unlimited capacity.

If at any point a consumer makes a request, and the cloud portal they use responds with a “NO” because it’s run out of cloud resources, this illusion has gone. That has real consequences. As it is very easy for consumers to move between cloud providers, it’s very likely that the provider will have lost them as customers and will find it very hard to get them back. Remember, even for internal IT cloud providers, it’s a competitive market place and the customer is king.

So, when defining your cloud strategy, you want to make sure that maintaining ‘the illusion of unlimited capacity’ is on your list. It may not be something you need to consider initially, but when demand for your services increases, you need to be ready to deal with the challenge. To prepare for it, here are 5 things you should start thinking about:

  • Understand your customers – build a strong relationship with your customers, understand their business plans, and use this information to understand the impact those plans will have on the demand for your cloud services.
  • Implement the appropriate tooling – so you can not only understand demand for your cloud capacity today, but also forecast future demand.
  • Consider the Hybrid Cloud – think about how you would burst services in and out of a hybrid cloud and when you would need to do it. Before you actually need to do this, make sure you plan, prepare and automate (where possible), so that everything is in place when it’s needed. Don’t wait until it’s too late.
  • Train users on service consumption etiquette – if they know they can get what they need when they need it, they will be less inclined to hoard resources. And if they aren’t hoarding resources, the pressure to predict their future demand (which can be difficult) will be reduced, because resources are being used more efficiently. Why not agree that they won’t have to plan capacity if they “turn it off” when done, thus freeing resources back to the pool and further increasing spare capacity?
  • Kill zombie workloads – be aware of services that aren’t being used and turn them off (after having a conversation with the customer). Also, encourage the use of leases for temporary services when appropriate.

Finally, going back to the essential characteristics of cloud computing as defined by the National Institute of Standards and Technology (NIST) (here is the very short document for those of you that haven’t read it), one other characteristic is rapid elasticity.

If you think about it, this article is really all about rapid elasticity. It’s just another way of saying that you need to maintain the illusion of unlimited capacity. Now, put on your top hat, hold on to your magic wand, and keep the illusion going.

For future updates, follow @VMwareCloudOps on Twitter and join the conversation using the #CloudOps and #SDDC hashtags.

by Andy Troup at May 31, 2013 05:23 PM

Solution to a “Mostly Cloudy” Problem

VMware vCloud Blog

This is a guest post from vCloud Service Provider, Logicalis.

By: Steve Pelletier

Have you ever had an IT project that you thought would be ideal to put into a public cloud except for one or two requirements that cloud providers just can’t seem to meet?  I like to refer to these as “mostly cloudy” projects.  As Logicalis has developed its public cloud infrastructure, we’ve had many customers approach us with just this type of project.  An increasingly large proportion of those clients are ISVs who want to focus on their software development and move the hosting of their platform to the cloud.  But there’s an important problem preventing them from doing just that: public clouds typically don’t offer any custom options.

Most public cloud providers believe that standardization is the only way to provide cost benefits on a large scale.  Standardized hardware, automation, management levels – all of these standard tools and functions and more are typically put in place to make a public cloud environment both very efficient and highly repeatable.  Typical public clouds can’t be all things to all people, but they strive to be most things to many.  The problem is, this leaves those “mostly cloudy” projects with no place to turn in the public cloud.

At Logicalis, however, we take a much more consultative approach to everything we do, including our public cloud infrastructure.  We’re working to accommodate these kinds of mostly cloudy projects by deploying dedicated cloud environments.  By applying the same managed services practices that are used in remote management scenarios, Logicalis can provide the appropriate dedicated hardware and management levels as well as the custom requirements many clients need – things which would otherwise have prevented the solution from being hosted in a public cloud.  Logicalis can incorporate custom hardware requirements, higher SLAs, and enhanced security requirements, as well as other customized functions such as allowing a client access to the underlying virtualization and physical layers.  At Logicalis, we’re using our experience running a VMware-based public cloud – the Logicalis Enterprise Cloud – as well as our considerable expertise as a managed service provider to accommodate many of the unique requirements that traditional public clouds simply can’t deliver.

This makes Logicalis’ public cloud and dedicated cloud solutions, built on a VMware platform, ideal for ISVs that are looking to move their applications into the cloud.  Logicalis can provide a robust environment that takes advantage of all of VMware’s functionality to meet the unique requirements of providing a SaaS-based solution.  This allows ISVs to focus on the development and support of their applications without having to worry about the underlying infrastructure.

If the forecast for your project is “mostly cloudy,” a dedicated public cloud solution may be the answer.

Steve Pelletier is a solution architect for Logicalis US, an international IT solutions and managed services provider (www.us.logicalis.com).

by vCloud Team at May 31, 2013 04:00 PM

No neckties in the paper shredder: Horizon Mirage Branch Reflectors

VMware End User Computing

By Tina de Benedictis, Senior Technical Marketing Manager, End-User Computing, VMware

No neckties in the paper shredder—what does that have to do with VMware Horizon Mirage Branch Reflectors? By the end of this blog post, you will know.

no-necktie-horizon-mirage-branch-reflectors

You have probably noticed those words or an icon on the paper shredder that indicate you should not put neckties in the paper shredder. Who would put a necktie in the paper shredder? It might be someone who was not paying enough attention and let their necktie dangle into the shredder, or it might be someone who hated that particular necktie and thought the paper shredder was the right place to demolish it.

This is where we find the similarity to Horizon Mirage Branch Reflectors. Branch Reflectors are for efficient handling of layer updates coming down from the datacenter to endpoints, not for backups of endpoints going back up to the datacenter. Who would think that Branch Reflectors are for backups? A lot of people do, and they are surprised to find out that they need to think about their WAN instead of the LAN when planning backups of branch-office endpoints.

horizon-mirage-download-layers-upload-user-changes

Figure 1: Layer Updates and Backups of Endpoints in a Horizon Mirage Deployment

To understand the purpose of Branch Reflectors, you need to understand the flow of layer updates and backups in a Horizon Mirage implementation. In this diagram, you see that IT sends down layer updates to endpoints over the WAN. These IT-managed layers are the base layer (with the operating system) and any application layers.

When Horizon Mirage performs backups of endpoints, endpoint images are sent up to the datacenter over the WAN. An endpoint image includes the updated layers sent down from the datacenter, as well as user changes to the endpoint.

The purpose of a Branch Reflector is to reduce bandwidth usage over the WAN by performing layer updates to endpoints within the remote-office LAN.

horizon-mirage-branch-office-deployment

Figure 2: Layer Updates with a Horizon Mirage Branch Reflector

You can designate one or more existing Mirage-managed endpoints in a remote office as Branch Reflectors. No special setup, installation, or infrastructure is required. With a few clicks in the UI, you have created a Branch Reflector from an endpoint.

Only the Branch Reflector communicates with the Mirage Server. The Branch Reflector downloads the differences between the IT-managed layers in the datacenter and the layers on the branch-office endpoints. Then the Branch Reflector compiles the bits locally to build a new set of IT-managed layers, and distributes these layers to peer PCs over the local LAN.

The Branch Reflector thus serves as an update service for peer PCs in the branch office. Instead of connecting over the WAN to the distant Mirage Server in the datacenter, remote endpoints can connect to the local Branch Reflector over the LAN to receive layer updates.
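A back-of-the-envelope sketch shows why this matters for WAN bandwidth. The function and figures below are illustrative only, not Mirage internals; real per-endpoint deltas vary:

```python
def wan_transfer_gb(endpoints, layer_delta_gb, branch_reflector=False):
    """WAN data needed to push one layer update to a branch office.

    Without a Branch Reflector, every endpoint pulls the delta over the WAN;
    with one, only the reflector does, and its peers update over the local LAN.
    Illustrative model only.
    """
    return layer_delta_gb * (1 if branch_reflector else endpoints)

# Hypothetical branch office: 50 endpoints, 2 GB layer delta.
print(wan_transfer_gb(50, 2))                         # 100 GB over the WAN
print(wan_transfer_gb(50, 2, branch_reflector=True))  # 2 GB over the WAN
```

In this toy model the WAN carries the delta once instead of fifty times; the remaining distribution happens on the branch LAN.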

So, back to no neckties in the paper shredder. Every tool has its purpose. Horizon Mirage Branch Reflectors are for efficient delivery of layer updates to remote-office endpoints, not for endpoint backups. And remember to keep your necktie out of the paper shredder.

For more information about Horizon Mirage layers and Branch Reflectors, see the VMware Horizon Mirage 4.0 Reviewer’s Guide.

by Tina de Benedictis at May 31, 2013 01:01 PM

Save 50% on BETA Course: VMware vCloud Automation: Install, Configure, Manage [V5.2] – BETA

VMware Education & Certification Blog

VMware offers BETA courses to those wanting to participate in finalizing a near-complete course. You save 50% off the course price. Register today, as BETA courses fill up quickly.

VMware Horizon Mirage: Install, Configure, Manage [V4.0]-BETA
Location: classroom delivery in San Jose, CA, USA
Time:  July 8-11 @ 9:00 PDT

See course description and register today! Or see what other VMware BETA Courses are available.

by Elaine Sherwood at May 31, 2013 12:45 PM

A New Key Financial Metric for IT’s Cloud Journey

VMware Accelerate

Author: Mark Sarago

Working with numerous customers on their journey to the cloud has exposed the Accelerate team to a number of metrics that are used to determine an organization’s health and overall value to the business. Let’s focus on a new financial metric that is gaining popularity: private cloud versus public cloud cost per workload.

In their seminal paper, The Balanced Scorecard—Measures that Drive Performance, published in the Harvard Business Review, Robert Kaplan and David Norton introduced the balanced scorecard as a performance measurement framework. It built on traditional financial measures by adding important non-financial performance indicators to the mix. As a result, it gives executives and managers a more balanced view of organizational performance.

The balanced scorecard has proven to be an effective method of communicating an organization’s overall strategy by establishing a balanced set of tangible goals and a framework for measuring progress toward those goals. The balanced scorecard suggests that we view the organization from four separate perspectives, and that we develop metrics, collect data, and analyze the data relative to each of the perspectives, which are:

  1. Financial Perspective – To succeed financially, how should we appear to our shareholders?
  2. Internal Business Perspective (Process) – To maximize our business value, at which processes must we excel?
  3. Customer Perspective – To achieve our vision, how must we appear to our customers?
  4. Innovation and Learning Perspective – To achieve our vision, how will we sustain our ability to change and improve?

CIOs quickly saw the legitimacy of the balanced scorecard and have successfully used it to communicate strategy to their team members, and to communicate the value of their information technology activities to their organization’s business executives and customers.

Each of the four perspectives is important, but the one that gets the most attention from business executives — and seems to cause the most concern and confusion for CIOs — is the Financial Perspective performance measurement. It can also be said that the Financial Perspective performance measures are the most important for business executives because the primary language of business is conducted in financial terms – How much will it cost? How much will this save over time? What is the financial break-even period? What is the ROI? — and so forth.

CIOs have responded to the Financial Perspective performance measures of their balanced scorecards by tracking financial metrics such as:

  • Actual to Budget: How does actual OpEx spend compare to the original OpEx budget?
  • Forecast Accuracy: Is the accuracy of the OpEx spend forecasts over the past 12 months within plus/minus two percent?
  • Cost-Per-Business-Unit Trend: Is the IT total cost of ownership (TCO) per unit of business output (e.g., airline seat mile flown, mortgage transaction count, automobiles manufactured) increasing or decreasing over time?

With the advent and popularity of cloud concepts and technologies for IT organizations, we now ask:  What would a CIO want to see as a financial metric in the balanced scorecard to represent their organization’s journey to the cloud?

A few organizations I have met with recently, which have mature metrics tracking and reporting in place, have already answered the question. They measure their IT TCO per workload in their private cloud against the price of hosting the same workload on a public cloud service such as Microsoft’s Azure or Amazon Web Services’ EC2. When doing so, they also add data transfer into the cost; that is, the cost of moving data in and out of the service is added to the computational workload costs incurred.

The metric that compares private cloud workload cost versus all-in public cloud workload pricing is extremely valuable to the CIO. If your private cloud workload cost is lower than public cloud workload pricing, you are showing immediate business value through your IT operation. Conversely, if your private cloud costs are too high, business management is certainly justified to ask: Why should we use your service if we can get it cheaper from a public cloud provider?

Some organizations are so confident in the calculation of their private cloud cost per workload and the efficiency of their operation that they have started to build in an added twist. These efficient operations are using the difference, or spread, in costs between private and public solutions as IT operational “profit.” In turn, the “profit” is used to acquire new equipment and software as they refresh their private cloud going forward. These organizations are truly running IT like a business.
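The metric itself is simple arithmetic. A minimal sketch, with entirely hypothetical dollar figures and a function name of my own choosing:

```python
def cost_spread(private_tco, public_compute, public_data_transfer):
    """Per-workload spread between private-cloud TCO and all-in public pricing.

    A positive result means the private cloud is cheaper; efficient
    operations can treat that spread as operational 'profit'.
    """
    public_all_in = public_compute + public_data_transfer
    return public_all_in - private_tco

# Hypothetical monthly figures for one workload:
#   private TCO $410; public compute $450 plus $50 data transfer in/out.
print(cost_spread(410, 450, 50))  # 90 -> $90 per workload per month
```

The key design point is that the public-cloud side is priced "all-in", including data transfer, so the comparison is not skewed in the public cloud's favor.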

If you aren’t familiar with the balanced scorecard for IT, please give it a deeper look. While doing so, also consider including a new metric to the Financial Perspective performance measures, and include the private cloud versus public cloud cost per workload.

——–

Mark Sarago is a strategist with VMware Accelerate Advisory Services.

VMware AccelerateTM Advisory Services can help you define your IT strategy through balanced transformation plans across people, process and technology. Visit our Web site to learn more about our offerings, or reach out to us today at accelerate@vmware.com for more information.

Would you like to continue this conversation with your C-level executive peers? Join our exclusive CxO Corner Facebook page for access to hundreds of verified CxOs sharing ideas around IT Transformation right now by going to CxO Corner and clicking “ask to join group.”

by Heidi Pate at May 31, 2013 12:29 AM

May 30, 2013

The New Test Drive program from NetApp and VMware lets SMBs experience firsthand how virtualization can benefit their storage networks.

VMware for Small-Medium Business Blog

NetApp and VMware are very aware of what SMBs are struggling with when it comes to backup, and have developed a joint hardware/software solution that should alleviate their concerns. SMBs that purchase an affordable storage solution from NetApp and upgrade to VMware vSphere with Operations Management Enterprise (VSOM Ent) will be able to quickly and easily add backup and disaster recovery capabilities to their existing vSphere environment. This gives SMBs the ability to implement these more advanced storage capabilities using the VMware vCenter console they have grown used to working with.

You can also take a look back at the other posts, “What’s the True Cost of Virtual Network Storage to the SMB?” and “Virtualization and Mid-Size Businesses: What’s the Hold Up?”

If you’re interested in seeing just how easy it is to protect your valuable applications and add backup and disaster recovery to your virtualized network, you can download a free 90-day trial of the NetApp virtual appliance and VMware software here.

Follow VMware SMB on Facebook, Twitter, Spiceworks and Google+ for more blog posts, conversation with your peers, and additional insights on IT issues facing small to midmarket businesses.

by VMware SMB at May 30, 2013 09:58 PM

Save 10% on June’s Live Online Training Schedule in EMEA

VMware Education & Certification Blog

VMware is offering a 10% discount on all direct-delivered Live Online classes scheduled in EMEA through 30 June! Check out the list of available courses below and register today.

Remember to use referral code: EMEALOL10 when registering, to get your discount!

Course Name | Start Date | Duration (Days) | Language
VMware vCenter Configuration Manager for Virtual Infrastructure Management [V5.5] | 10/06/2013 | 3 | English
VMware vSphere: What’s New [V5.1] | 10/06/2013 | 2 | English
VMware vSphere: Install, Configure, Manage [V5.1] | 17/06/2013 | 5 | English
VMware vCloud Director: Install, Configure, Manage [v5.1] | 17/06/2013 | 3 | English
VMware vCenter Operations Manager: Analyze and Predict [V5.x] | 20/06/2013 | 2 | English

by Jill Liles at May 30, 2013 03:28 PM

May 29, 2013

SAP HANA on VMware vCloud Suite

VMwareTV

See how virtualizing SAP HANA on VMware vCloud Suite increases IT agility, simplifies management and lowers total cost of ownership. Learn more: http://vmwar...

by vmwaretv at May 29, 2013 11:44 PM

Scott Lundstrom (IDC) on Healthcare Performance and Infrastructure

VMwareTV

Hear Scott Lundstrom, Group VP, IDC Health IT Insights, at HIMSS 2013 reflect on how providers, for the last few years, have been focused on electronic...

by vmwaretv at May 29, 2013 11:31 PM

Integrating Horizon View and Horizon Workspace, Part 1

VMware Technical Communications Video Blog

This video shows how to set up Horizon View and Horizon Workspace to provide users with a single, integrated point of access to their virtual desktops.


by Chuck Potter at May 29, 2013 09:43 PM

Configuring Horizon Workspace to integrate with Horizon View, Part 2

VMware Technical Communications Video Blog

This video shows how to configure Horizon Workspace to synchronize with Horizon View so that users can launch their desktops from Horizon Workspace.


by Chuck Potter at May 29, 2013 09:39 PM

VMware Forum 2013: Visitor Testimonials

VMwareTV

At the recent VMware Forum events, we spoke to some of the visitors to see what they learned and enjoyed most about the event.

by vmwaretv at May 29, 2013 08:51 PM

A Day in the Life of Sydney Adventist Hospital with VMware View

VMwareTV

The hospital embarked on an aggressive digitization project almost a decade ago. Now, with the major systems in place, this video looks at a typical day in this specialist hospital. The heads...

by vmwaretv at May 29, 2013 08:08 PM

VMworld 2013 US: Last Chance for Early Bird

VMworld Blog

Save $500 Off Onsite Registration - Early Bird Rate Ends 6/10

Time is running out to register for VMworld 2013 at an early bird discount rate! Join us in San Francisco on August 25–29 for the 10th annual VMworld and learn how to extend the benefits of virtualization to all data center services.

At VMworld 2013, you’ll gain the tools you need to transform conventional remedies into seamless, agile solutions that dramatically simplify your operations by taking advantage of:

    • More than 350 in-depth sessions
    • 26 Hands-On Labs
    • 275 sponsors and exhibitors in the Solutions Exchange
    • Networking opportunities with industry experts and other IT professionals

Still on the fence? VMworld’s content catalog will launch June 7th so you can learn more about the unique education opportunities. The content catalog is your guide to sessions and speaker information, giving you the ability to customize your calendar and plan out each conference day.

Register before June 10th and save $500 off onsite registration pricing. Together, we can evolve from the ordinary and leave the pitfalls of legacy computing behind. This is VMworld 2013 – 10 years of Defying Convention.


We look forward to seeing you there!

May 29, 2013 06:03 PM

What Do We Mean by IT Services in the Cloud Era?

VMware Cloud Ops Blog

By Kevin Lees

You hear it all the time from cloud evangelists: instead of delivering based on projects, IT should now be delivering around a common set of services.

It’s not a new idea—but cloud computing promises to finally make it a reality.

Before we get too excited, though, we should ask: what do we actually mean by cloud services? That’s not something cloud advocates always make clear.

So here’s an example:

The other week I was talking with a customer who runs a cloud that supports production dev/test environments for a government agency. These environments are in turn supporting mission-critical applications that play a major role in maintaining the public’s health.

From a service perspective, the tenant ops team is identifying and building a set of common development platforms as virtual applications. In this case each platform consists of three tiers, with each tier running a Windows operating system that’s been pre-built to meet government security policies. The composite platforms all have monitoring drivers already installed, and also feature commonly used development environments – in this case either Microsoft .NET-based or Java-based.

Collectively, that creates a common virtual dev/test vApp pre-built with a lot of the core capabilities and requirements to do this type of mission-critical application development. My customer’s team is then offering this multi-tier stack as a “service” via self-service, on-demand provisioning.

In the past, it could have taken two to three months to stand up something like this for a new round of development and testing. Now, with these prepackaged, common services, a new development environment can be deployed in less than an hour.

It’s a great example of how quickly you can provision, not only from an infrastructure perspective, but also so that developers don’t have to repeatedly start out with raw infrastructure and build in all of their own environments.

This standardized, pre-packaged development environment can also be used across multiple development teams and even across multiple departments. Each may need to do some tweaking for their particular area, but it saves everyone an enormous amount of work.

For future updates, follow @VMwareCloudOps on Twitter and join the conversation using the #CloudOps and #SDDC hashtags.

by Kevin Lees at May 29, 2013 04:00 PM

Collecting diagnostic information from vSphere using the vSphere Web Client

VMware Support Insider

When working with VMware Technical Support you will routinely be asked to provide diagnostic log bundles from your vSphere environment.  Our technical support staff use these in their investigation of your reported issues and in some instances to determine root cause.

We have a new video today which discusses and demonstrates how you can use the vSphere Web Client to collect the diagnostic information for the ESXi and vCenter Server systems, which run in your vSphere 5.1 environment.

This video is specifically geared towards users of our vSphere 5.1 product suite.

In this tutorial you will be guided through the necessary steps for gathering the log bundles from your vSphere 5.1 systems using the vSphere Web Client.

When the vSphere Web Client is connected to the vCenter Server system, you can select the hosts from which to generate and download system log files, with the option to include the vCenter Server and vSphere Web Client logs.

For additional information, see VMware Knowledge Base article Collecting diagnostic information for ESX/ESXi hosts and vCenter Server using the vSphere Web Client (2032892).

Note: For best viewing results, ensure that you have the 720p setting selected and that you are viewing using the full screen mode.

by Graham Daly at May 29, 2013 01:45 PM

vExpert 2013 awardees announced

VMTN Blog

We’re pleased to announce the list of vExperts for 2013. Each of these vExperts has demonstrated significant contributions to the community and a willingness to share their expertise with others. We are blown away by the passion and knowledge in this group, a group that is responsible for much of the virtualization evangelism taking place in the world — both publicly in books, blogs, online forums, and VMUGs, and privately inside customers and VMware partners. Congratulations to you all!

We have named 581 vExperts, which is the largest group yet in the 5-year history of the program. For many, this is the first year they’ve been a vExpert, yet they’ve been working with virtualization and VMware products for years. We’ve also seen a large group come in from emerging markets and non-English-speaking regions of the world.

I want to personally thank everyone who applied and point out that a “vExpert” is not a technical certification or even a general measure of VMware expertise. The judges selected people who were particularly engaged with their community and who had developed a substantial personal platform of influence in those communities. There were a lot of very smart, very accomplished people, even VCDXs, who weren’t named as vExpert this year. We’ll be reaching out to you to discuss the program and ways to successfully increase your community involvement.

If you feel you were passed over in error, that’s entirely possible. The judges may have overlooked or misinterpreted what you wrote in your application. In addition, we used the sophisticated Big Data Platform called Microsoft Excel for much of our analysis, and there was some cutting and pasting, so we could have introduced discrepancies. Email us at vexpert@vmware.com and we can discuss your situation.

If you were selected as a vExpert 2013, we’ll be conducting the on-boarding throughout this week, so hold tight and expect future communication from us soon. You must successfully be enrolled in our private vExpert community to be listed in the vExpert directory and to be alerted to opportunities like the beta programs and complimentary licenses that we offer to vExperts.

Congratulations to all the vExperts, new and returning. We’re looking forward to working with you this year.

John Troyer, Corey Romero,
and the VMware Social Media & Community Team

First Name Last Name Twitter Username
Abdullah Abdullah @do0dzZZ
Mark Achtemichuk @vmMarkA
Rotem Agmon @RotemAgmon
Pietro Aiolfi @aiolfip
Niklas Akerlund @vNiklas
Eiad Al-Aqqad @virtualizationT
Ashraf Al-Dabbas @azdabbas
Cesar alcacibar @calcacibar
Urs Stephan Alder
Alex Amaya Alex_Emulex
Nick Anderson @speakvirtual
Magnus Andersson @magander3
Joshua Andrews @SOStech_WP
Daniel Ang
Tolga ANIT @tolgaanit
Tim Antonowicz @timantz
Gonzalo Araujo @gonzaloaraujoc
Michael Armstrong virtsouthwest
Michael Armstrong m80arm
John Arrasjid @vcdx001
henk arts @vmhenk
Erkal ASLANKARA @erkalaslankara
Steve Athanas @steveathanas
Brian Atkinson vmroyale
Josh Atwell josh_atwell
Burke Azbill @TechnicalValues
Martijn Baecke @baecke
Doug Baer @dobaer
Kees Baggerman @kbaggerman
Iain Balmer balmeri
Erin Banks @banksek
David Barclay @davidbarclay99
Sanjay Basu @sbasu777
Bonnie Bauder @BonnieBauder
Stephen Beaver @sbeaver
Ivo Beerens @ibeerens
Ather Beg @AtherBeg
Kanuj Behl @vmwise
Gunnar Berger @gunnarwb
Daniel Berkowitz @berkowitzdan
Emmanuel BERNARD @veemanuel
Jason Bertini @jlbertini
Ryan Birk ryanbirk
Vincent Blue @vinceblue
Jason Boche @jasonboche
Karol Boguniewicz
Mauro Bonder
Matt Boren @mtboren
James Bowling @vSential
Jeremy Bowman @jeremyjbowman
Jeremie Brison @J_Brison
Marco Broeken @mbroeken
Mike Brown @VirtuallyMikeB
James Brown @vegasvmug
Douglas Brown @douglasabrown
Mike Brown @vMikeBrown
Steve Bruck @vColossus
Damien Bruley @virticfr
Marcel Brunner @VirtualBrunner
Sandy Bryce @sandybryce
Andrew Brydon
Petr Buchmaier
James Burd @TheBurdweiser
Erik Bussink @ErikBussink
Rick Byrne @rickrbyrne
Richard Caldwell @nrcaldwell
Shawn Cannon @rolltidega
Andre Carpenter andrecarpenter
Hersey Cartwright @herseyc
Patricio Cerda @patote83
Samuele Cerutti samuelecerutti
Joe Chan @virtuallyhyper
Mark Chandler
Peter Chang @pupo888
Gabriel Chapman @Bacon_Is_King
Gus Chavira
Fabio Chiodini @FabioChiodini
Brad Christian @bchristian21
Pornpol Chunchadatharn @ThaiVirt
Chris Cicotte @chris_cicotte
Troy Clavell @troyclavell
Ben Clayton @grob4ever
Fletcher Cocquyt @cocquyt
Josh Coen
Kendrick Coleman @KendrickColeman
Mike Colson @mike_colson
Chris Colotti @ccolotti
Alastair Cooke @DemitasseNZ
Barry Coombs @virtualisedreal
Jonathan Copeland VirtSecurity
Michael Corey @michael_corey
Carlo Costanzo @CCOSTAN
Matt Cowger @mcowger
Stephen Crafton
Celia Cristaldo Cantero @celiacri
Tom Cronin
sean Crookston seancrookston
Jay Cuthrell @Qthrul
Ed Czerwin @eczerwin
Lieven D’hoore @ldhoore
Sander Daems sanderdaems
Luigi Danakos NerdBlurt
Andy Daniel @vnephologist
Paul Davey pauld_xtravirt
Alaric Davies @alaricdavies
Simon Davies @EV_Simon
David Davis @davidmdavis
Ron Davis
Hans De Leenheer @hansdeleenheer
Rob de Veij @rvtools
Chris Dearden
Christophe Decanini @vCOTeam
Desh Deepak @contact_desh
Tayfun DEGER @tayfundeger
Luc Dekens @LucD22
Peter Del Rey @PeteDelRey
Luca Dell’Oca @dellock6
Frank Denneman @frankdenneman
Amitabh Dey @amitabhpancham
John Dias @johnddias
Philip Ditzel @philditzel
Kevin Divine @kevindivine1
Jeramiah Dooley @jdooley_clt
Geoff Douglass @geoff_douglass
Sunny Dua @sunny_dua
Sean Duffy @shogan85
Adam Eckerle @eck79
Robert Edwards @bobbygedwards
Ricky El-Qasem @rickyelqasem
Karim Elatov @virtuallyhyper
Khaled Eldosuky vmmanco
Scott Elliott @Fulcrum72
Mike Ellis @v2Mike
Niels Engelen nielsengelen
Raymon Epping @repping
Duncan Epping @DuncanYB
Vladimir Eskin @eskinv
Frank Fan @frankfan7
Joe Filippello @joefilippello
Tomas Fojta @fojta
Mike Foley @mikefoley
Christopher Forbis @Chris_Forbis
Stephen Foskett @SFoskett
Tony Foster @wonder_nerd
Eric Fourn @efourn
Liselotte Foverskov @LFoverskov
Andy Fox
Jonathan Franconi @jfranconi
Jonathan Frappier @jfrappier
Edwin Friesen edwinfriesen
Dan Frith @penguinpunk
Manlio Frizzi @mfrizzi
Yusuke Fujita
Rod Gabriel @ThatFridgeGuy
Alex Galbraith @alexgalbraith
Lorenzo Galelli @Virtually_LG
Simon Gallagher @vinf_net
Evgeny Garbuzov @_EGarbuzov
Marc GAUB @mgaub
Jason Gaudreau jagaudreau
Charlie Gautreaux chuckgman
Earl Gay @earlg3
Chris Gebhardt @chrisgeb
RAMESH GEDDAM
Justin Giardina @jgiardina
Paul Gifford cloudcanuck
Rocky Giglio @rockygiglio
Tim Gleed @timgleed
Jose Luis Gomez Ferrer de Couto @pipoe2h
Changbin Gong @changbin2011
Jose Maria Gonzalez @jose_m_gonzalez
Larry Gonzalez @virtualizecr
Jesus Gonzalez SouthFLVMUG
Ben Goodman @benontech
Sergey Gorlinsky
Anton Gostev @gostev
Brian Gracely @bgracely
Eric Gray @eric_gray
Luke Gray @VirtualLukeG
Florian Grehl @virten
Paul Grevink @paulgrevink
Edward Grigson @egrigson
Stephane Grimbuhler @sgrimbuhler
Josep Maria Gris @josemariagris
Giuseppe Guglielmetti @gguglie
Curtis Gunderson @cagunlabs
NATHAN GUSTI RYAN nathan_gt_ryan
Forbes Guthrie @forbesguthrie
Patrick Häfner
Brandon Hahn @brandonhahn
Edward Haletky @Texiwill
Paul Hall macjunkie
Chris Halstead @chrisdhalstead
Andrew Hancock @einsteinagogo
Ulli Hankeln @sanbarrow
Kenneth Hansen
Christoph Harding @cdommermuth
Jon Harris @JonHarrisNM
Justin Hart @jghartin
Michael Hart @csilouisville
bilal hashmi @hashmibilal
Rasmus Haslund @haslund
Archie Hendryx Archie_Hendryx
Drew Henning @DrewHenning
Dave Henry @davemhenry
Paul Henry phenrycissp
Joachim Heppner joachimheppner
Hector Herrero @nheobug
Minako Higuchi @mihiguch
Shigeru Hihara
Bill Hill @virtual_bill
David Hill @davehill99
Larus Hjartarson @lhjartarson
Cormac Hogan @VMwareStorage
Tom Hollingsworth @networkingnerd
Wade Holmes @wholmes
Rodney Hope @RodHope
Cody Hosterman codyhosterman
Tom Howarth @tom_howarth
Nick Howell @that1guynick
William Huber @huberw
Sylvain HUGUET @vshuguet
Sven Huisman @svenh
Marc Huppert @MarcHuppert
Sungho Hwang @yueisu913
TORU IIJIMA @hamboxes
Deddy Iswara
Jeff Jackson
Mohamed Jamal-Eddine MouhamadOnline
Duncan James @Dunc_James
Duco Jaspars @vConsult
Christophe Jauffret @tuxtof
Steve Jin sjin2008
Christian Johannsen @cjohannsen81
Phillip Jones @p2vme.com
Ryan Johnson @tenthirtyam
Thomas Jönsson
Lior Kamrat #LiorKamrat
Steven Kaplan @ROIdude
Damian Karlson @twitter
Ferdinand Karner
Manabu Kato @mankatou
Gopinath Keerthyrajan @gopikeerthy
Kevin Kelling @blueshiftblog
Joe Kelly @virtualtacit
Mostafa Khalil @mostafaVMW
Gurusimran Khalsa @gurusimran
Elias Khnaser @ekhnaser
Craig Kilborn @vmfcraig
Charles Kim @racdba
Brian Kirsch @bckirsch
Clinton Kitson @clintonskitson
Gert Kjerslev @gertkjerslev
David Klee @kleegeek
David Klem @davidklem
Brian Knudtson @bknudtson
Daniel Koeck
Pete Koehler @vmpete
Kenichiro Komai
Askar Kopbayev
Mikael Korsgaard Jensen @jekomi
Julius Kovac @JPerformer
Matthew Kozloski
Artur Krzywdzinski @artur_ka
Masaomi Kudo @interto
Andrew Kuftic @ajkuftic
Bryan Kuhn btkuhn
Grzegorz Kulikowski psvmware
Akihito Kumagai @vakuma00
Christopher Kusek @cxi
William Lam lamw
Yendis Lambert @yendislambert
Jason Langer @jaslanger
Henry Langner @hanky_q
Jason Langone langonej
THOMAS LAROCK @SQLRockstar
Javier Larrea @javichumellamo
Shafay Latif Shafay2000
Justin Lauer @Justin_Lauer
Jarret Lavallee @withoutmoop
Dave Lawrence @thevmguy
Eric Lee @ericblee6
Michael Leeper @mleeper
Matthew Leib @mbleib
Andre Leibovici @andreleibovici
Maciej Lelusz @maciejlelusz
Leandro Ariel Leonhardt leonhardtla
Andreas Lesslhumer @lessi001
Dwayne Lessner @dlink7
steve lester @stevelm33
Michael Letschin @mletschin
Joerg Lew @joerglew
Todd Lewey @tlewey
Matt Liebowitz mattliebowitz
Ferry Limpens @Ferry_virtual
Ben Lin @blin23
Simon Long simonlong_
Tomislav Loparic
Gabriel Lowe @gabeontap
Scott Lowe otherscottlowe
Scott Lowe @scott_lowe
Angelo Luciani @angeloluciani
Roger Lund @rogerlund
Tony Lux tonylux
Todd Mace @mctodd
Mario Mack @vMario156
Mahmoud Magdy @_busbar
Ryan Makamson @virt_pimp
Munishpal Makhija
Agustin Malanco
Lauren Malhoit @malhoit
Michael Malizhonak
Brad Maltz @bmaltz
Howard Marks @DeepStorageNet
David Marshall @vmblogcom
Nick Marshall @nickmarshall9
Frederic Martin @vmdude_fr
Piotr Masztafiak @pmaszt
Andrea Mauro @Andrea_Mauro
Andrew Mc Daniel vmskills
Ryan McBride RyanMcBride81
Jason McCarty @jasemccarty
Sam McGeown @sammcgeown
Stuart McHugh @Stu_mchugh
Matt McLaughlin @matthewmcl
Jack McLeod @mccloudoncloud
Bruce McMillan @BruceMcMillan
Paul McSharry @pmcsharry
Jonathan Medd @jonathanmedd
Cedric Megroz @cmegroz
Roy Mikes @teovmy
Remigiusz Mikula @remik72
Jim Millard @millardjk
Andrew Miller andriven
Scott Miller scottalanmiller
Zach Milleson @zmilleson
Chuck Mills @vchuckmills
Stuart Miniman @stu
Dave Mishchenko
Brian Mislavsky @bmislavsky
Yasumasa Mita @ymita
Eric Mitchell @EricSMitchell
Tsuneyuki Mitsugi @tunemicky
Christian Mohn @h0bbel
Eric Monjoin @emonjoin
Massimiliano Mortillaro @darkkavenger
Massimiliano Moschini @maxmoschini
Julien Mousqueton @JMousqueton
Alex Muetstege @amuetste
Kyle Murley @kylemurley
Matt Murray @mattmurray
Alex Musicante @AlexMusicante
Chris Nakagaki @zsoldier
Shigeyuki Nakamoto @shnakamo
Jason Nash @TheJasonNash
Ayan Kumar Nath
Vladimir Nazarov
Suresh Babu Nekkalapudi
Mike Nelson @nelmedia
Andreas Neufert @AndyandtheVMs
John Nicholson @Lost_Signal
Mike Nisk
Rickard Nobel
Keith Norbie @keithnorbie
Robert Novak @gallifreyan
Karel Novak @novakkkarel
Mikko Nykyri @BackupMikko
Geoffrey O’Brien @geoffreykobrien
Bryan O’Connor @bryanoconnor21
Josh Odgers @josh_odgers
Daichi Ogawa @ogawad_jp
Keiichiro Okada
Hirokazu Onishi
Roberto Orayen @elblogdenegu
Grant Orchard @grantorchard
Tim Oudin @toudin
Olivier PARCOLLET @DS_45
Manish Patel @mandivs
Matt Patterson @usrlocal
Justin Paul @recklessop
Frank Brix Pedersen @frankbrix
Andreas Peetz @VFrontDe
Ernesto Pellegrino @vmugonline
Ivan Pepelnjak ioshints
Robert Petruska
Andre Pett @ap_unleashed
Daniel Pfuhl @pfuhli
Didier Pironet @dpironet
Joep Piscaer @jpiscaer
Tony Pittman @pittmantony
Robert Plankers @plankers
Gregory Plough
Andrey Pogosyan @and_mv
Marco Pol @mrpol
Eric Pond @eric_pond
Evgeny Ponomarenko
Michael Poore @mpoore
Trevor Pott @cakeis_not_alie
Valentin Pourchet @vpourchet
Mike Preston @mwpreston
Alexander Prilepsky @vmworld5
Jason Puig
Brent Quick @brent_quick
Diego Adrian Quintana @daquintana
Bas Raayman @basraayman
Iwan Rahabok @e1_ang
Faisal Rahman
Conrad Ramos @vNoob
Fabio Rapposelli @fabiorapposelli
Abdul Rasheed @AbdulRasheed127
Massimo Re Ferre’
Itzik Reich @itzikr
Alan Renouf @alanrenouf
Michael Requeny @requenym
Juan Manuel Rey @jreypo
Phillip Reynolds @philvirtual
Brandon Rice
Brandon Riley @BrandonJRiley
Jane Rimmer @Rimmergram
Dominic Rivera @virtualdominic
Rawlinson Rivera @PunchingClouds
Trevor Roberts Jr @VMTrooper
Gregg Robertson @greggrobertson5
William Robertson @robertson_texas
Jake Robinson @jakerobinson
Josep Ros @josepros
Samir Roshan @kooltechies
Jonas Rosland @virtualswede
Michel Roth @michelroth
Ethan Rowe @rowe_ethan
Kyle Ruddy @ruddyvcp
Peter Rudolf @prudolf
Christian Rudolph @ChrisRu82
Maish Saidel-Keesing @maishsk
Chad Sakac sakacc
Alexander Samoylenko @vmcompany
Nicolai Sandager @nsa42
Jim Sanzone @theSANzone
Herry Sarip @athlon_crazy
Prasenjit Sarkar @stretchcloud
Mundakkal Satyajith
Scott Sauer @ssauer
Tim Scheppeit
Rick Scherer @rick_vmwaretips
Erik Schils @erikschils
raphael schitz @hypervisor_fr
Rick Schlander @vmrick
Erik Scholten @scholten + @vmguru_nl
Mike Schubert
Greg Schulz @storageio
Andrew Scorsone @ascorsone
Derek Seaman @vDerekS
Simon Seagrave @Kiwi_Si
Anil Sedha @anilsedha
Vladan SEGET @vladan
Adam Sekora asekora@trace3.com
Philip Sellers @pbsellers
Yuri Semenikhin @YuriSemenikhin
Takao Setaka @twtko
Dave Shackleford @daveshackleford
Eric Shanks @eric_shanks
Frank Shepherd @fgshepherd
Ichiro Shibutani
Greg Shields @concentratdgreg
Akio Shimizu shmza
Jason Shiplett @jshiplett
Takayuki SHIROYAMA @tshiroyama
Anthony Siano @advistorTony
Rocco Pierpaolo Sicilia roccosicilia
Maqsood Siddiqui @Maqsood_s
Maria Sidorova
Eric Siebert @ericsiebert
Herman Silva BigHermWPB
Josh Sinclair
Aravind Sivaraman @ss_aravind
Eric Sloof @esloof
Timothy Smith @tsmith_co
Dennis Smith @DennisMSmith
Brian Smith bsmith9999
Larry Smith @mrlesmithjr
Marcelo Soares @mtsoares42
Jens-Henrik Soeldner
Nicolas Solop @nsolop
Preeda Somabut @Tvirtualization
Piergiorgio Spagnolatti @drakpz
Stephen Spellicy @spellicy
Rynardt Spies @rynardtspies
Anthony Spiteri anthonyspiteri
Ruben Spruijt rspruijt
David Stafford @dstafford
David Stamen @iamddavee
Bobby Stampfle @BobbyFantast1c
Michael Stanclift @vmstan
Arron Stebbing @ArronStebbing
Chris Sterner @sternerc
Vaughn Stewart @vStewed
Christian Strijbos @vChrisSt
Hugo Strydom @hugo_strydom
Greg Stuart vDestination
Reuben Stump @ReubenStump
brian suhr @bsuhr
kirill sukhostavskiy @k_sukhostavskiy
Vijay Swami @vjswami
Yusuke Takahashi @v_takahashi
Wee Kiong Tan tanwk3
Takayuki Tanaka
Florent Tastet @sccyul
Stuart Thompson virtual_stu
Arjan Timmerman @Arjantim
Brian Tobia @btobia
Tolga Tohumcu @tolgatohumcu
Joshua Townsend joshuatownsend
Keith Townsend @virtualizedgeek
Mark Trimble @metrimble
Benjamin Troch @virtualb_me
Lars Troen @larstr
Tommy Trogden @vTexan
George Trujillo @GeorgeTrujillo
Tadao Tsuchimura sakuryupapa
Rinat Uzbekov
Martin Valencia @ubergiek
Paul Valentino @sysxperts
Ilann Valet @ivalet
Wil van Antwerpen @wilva
Herco van Brug @brugh
Marcel van den Berg @marcelvandenber
Peter Van den Bosch @petervdnbosch
Robert van den Nieuwendijk @rvdnieuwendijk
Frans van Rooyen
Kenneth van Surksum @kennethvs
Alan van Wyk @bulletprooffool
Gabrie van Zanten @gabvirtualworld
Matt Vandenbeld @vcloudmatt
Rick Vanover @RickVanover
Mark Vaughn mvaughn25
Ravindra Venkat @ravivenk
Riccardo Ventura @hypervise
Matt Vogt @mattvogt
Constantin Vvedenskiy @ConstantinV
Yohan Wadia @yohanwadia88
Christopher Wahl @ChrisWahl
Rob Waite rob_waite_oz
John Walsh @jwalsh2
Wei-Ren Wang
Joseph (Joey) Ware Joey_vm_ware
Tim Washburn @mittim12
Craig Waters @cswaters1
Michael Webster @vcdxnz001
Edwin Weijdema @Viperian
Shane Weinbrecht @shizrah
Jay Weinshenker @aus_effendi
David Weinstein @virtualrx
Christopher Wells @vsamurai_com
Paul Whitman
Humphrey Widjaja
Jerry Wilkin @jerrywilkin
Shane Williford coolsport00
Michael Wilmsen @wilmsenit
Michael Wilson @m1kew1lson
George Winter @vmxgeorge
Bertram Wöhrmann
Julian Wood @julian_wood
Darren Woollard dawoo
Avram Woroch @AvramWoroch
Eric Wright @discoposse
Brian Wuchner @bwuch
Kazuma Yamabe @virtapp_life
Yoshimasa Yamamoto @vYamamoty
Miho Yamamoto @mihochannel
Kong Yang @kongyang
Adem Yetim @ademyetim
Emad Younis emad_younis
Wen Yu @wensteryu
Erik Zandboer @erikzandboer
Preetam Zare @techstarts
Marek Zdrojewski @MarekDotZ
Anton Zhbankov @antonvirtual
Dennis Zimmer vmachine_de
BO BO ZIN @bebezet
Calvin Zito @hpstorageguy
dave zylyk

by John Troyer at May 29, 2013 12:28 AM

May 28, 2013

VMware Announces New vCloud Hybrid Service

VMware for Small-Medium Business Blog

On Tuesday, May 21st VMware announced the vCloud Hybrid Service – VMware’s new hybrid cloud service that seamlessly extends the data center to the cloud. The service was unveiled during a live webcast at our Palo Alto campus to thousands of customers, partners, employees, media and analysts around the world.

Customers will now be able to extend their existing infrastructure, skill-sets, tools and processes while leveraging the availability of a reliable VMware-owned and operated public cloud infrastructure. Now our customers will have the ability to evaluate and choose a public cloud based on their specific business and IT needs – whether it be vCloud Hybrid Service or a VSPP partner.

Register now to view the replay of this online event.

Additional Resources

Follow VMware SMB on Facebook, Twitter, Spiceworks and Google+ for more blog posts, conversation with your peers, and additional insights on IT issues facing small to midmarket businesses.

by VMware SMB at May 28, 2013 08:48 PM

Question of the Week: VCP5-DCV

VMware Education & Certification Blog

This week’s “Question of the Week” comes from the VMware Certified Professional 5-Data Center Virtualization (VCP5-DCV) Official Study Guide.


Which of the following cannot be used by the VMs on a VSA cluster?
a. vMotion
b. DRS clusters
c. vSphere HA
d. NFS

See below for the answer


Not sure of the answer? You can learn more about this topic in our VMware vSphere: Install, Configure, Manage course.

Answer: d. NFS

by Angela Guzman at May 28, 2013 07:00 PM

Take note of those KB articles we present

VMware Support Insider

When filing a Support Request in the My VMware Support Portal, you may notice there is a selection of five KB articles presented underneath the form after you tell us a few things about your issue. These articles are put there to help you resolve your issue before you have even spoken to someone, but did you know there is some intelligence behind what we show you? Today we will explore how we pick these KB articles. It is a rather involved and complicated process, but read on to discover how it’s done.

When you file a Support Request with VMware, everything related to resolving your issue is kept as a record. This helps us spot trends and keep track of issues that may occur frequently. After a certain amount of time, we gather reports to see which KB articles the support engineers commonly use to resolve an issue. The engineers document which KB articles they used in each and every support request they resolve. We then look at an article’s rating and how many times it has been linked to. If the KB article maintains a rating above three stars and is linked to quite frequently, we know that KB article is particularly useful in helping people to resolve that issue. That’s not the only thing we do. We also comb through the case notes of Support Requests to pick out trends, such as: did a virtual machine lock up, did your ESXi host crash, did vCenter Server stop responding, and so on.
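The rating-and-link heuristic described above can be sketched in a few lines of Python. This is purely illustrative; the article data and the three-star threshold are hypothetical stand-ins, not VMware’s actual pipeline:

```python
def rank_kb_articles(articles, min_rating=3.0):
    """Keep articles rated above the threshold, then rank by how often
    support engineers linked them to resolved Support Requests."""
    useful = [a for a in articles if a["rating"] > min_rating]
    return sorted(useful, key=lambda a: a["links"], reverse=True)

# Hypothetical data: KB id, average star rating, times linked by engineers
articles = [
    {"id": "KB2032892", "rating": 4.5, "links": 120},
    {"id": "KB1000001", "rating": 2.1, "links": 300},  # popular but poorly rated
    {"id": "KB2006980", "rating": 3.8, "links": 45},
]

for article in rank_kb_articles(articles):
    print(article["id"])  # prints KB2032892, then KB2006980
```

Note that a heavily linked but poorly rated article is filtered out entirely, matching the idea that both signals must agree before an article is surfaced.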

This is all categorized by different selections our Technical Support Engineers make as they work your support request. We call these vCats: a problem category, for example Host/Installation, plus a subcategory that describes the issue, a vSubCat, such as Storage Failure. All this information is trended and tracked. We analyze the data with utilities that can spot these trends automatically, although there is a manual component to it. Combing through this data can be a time-consuming task. We have ways to automate our reports, but much of the qualitative analysis is done by perusing spreadsheets, reading comments from the Technical Support Engineers, and reviewing the feedback we receive from customers.

Finally, when choosing the KB articles for the My VMware Support Portal, we tend to target articles that use multiple steps and checks to troubleshoot the issue. We call these types of articles Resolution Paths. They walk through specific issues step by step and give you a process for resolving them, such as a virtual machine that fails to start. In this way, they also reveal how a Technical Support Engineer might troubleshoot the issue.

All in all, the idea is that these KB articles will help you save some time. Hopefully, you find these useful, and think about taking a gander at these KB articles next time an issue arises.

by Bryan Hornstein at May 28, 2013 06:42 PM

The Open Software-Defined Data Center Incubator

Office of the CTO Blogs

As I’ve shared before, the Distributed Management Task Force (DMTF) has been engaged for many years in the development of IT infrastructure management standards. In fact, these standards have become the underpinnings of today’s systems management infrastructure and also enable the scalability of current data centers. Beyond this activity, the DMTF also developed standards for virtualization and cloud management that have now been adopted on a national and international level. These standards have helped improve the interoperability of management tools and have improved the portability of workloads between various platforms, as outlined in the recent Open Data Center Alliance tests on VM interoperability.

Building upon this work, the DMTF today announced the Open Software-Defined Data Center Incubator – a forum where the IT community can discuss and develop definitions, architectures and use cases for a software-defined data center that will be interoperable via open and standard interfaces. This is similar to the work it took on several years ago doing the same for Cloud Computing. The incubator establishes a venue for the industry to come together and create foundational whitepapers on what SDDC is, how it will be used and identify the gaps in the standards that will be needed to provide the interoperability and choice customers demand.

The goal of this group is to develop, over the next 12 months, the foundational documents that will enable the industry to further standardize management infrastructure for the next generation of data centers. The Incubator brings the industry together to collaborate and create the blueprint for the next generation of IT, which in turn puts customers on a path to improved agility, flexibility and new levels of automation.

I think the activity will help clarify the road to achieve the vision and the promise of SDDC. And at the same time, we’ll see the positive side effects of standards, which are increased choice for the customer, reduced cost for the vendors and improved interoperability of data center and cloud computing environments.

Are you on the path to building a SDDC? What tools do you need to get there? Please share your feedback and comments below.

May 28, 2013 05:12 PM

Proving Performance: Hadoop On vSphere – A Bare Metal Comparison

VMware vSphere Blog

When architects think about putting big data and Apache Hadoop on virtualized commodity servers, they usually see virtualization as a performance deterrent. Virtualization software is just that—software. Additional software layers are overhead, the thinking goes, and must make it run slower.

Not true.

A recent VMware performance study demonstrated that virtualized deployments perform on par with bare-metal deployments, and can even exceed bare-metal performance in certain cases when multiple virtual machines allow for greater parallelism.

Just like the data industry proved that distributed querying is faster and more scalable than a single monolithic source, VMware believes that performance can improve with virtualization and is working on a variety of projects including Hadoop Virtualization Extensions (HVE) and Serengeti, as well as working with vendors like Cloudera to certify their Hadoop distributions on vSphere.

As the whitepaper, Hadoop Virtualization Extensions on VMware vSphere® 5.1, points out, Hadoop’s topology awareness mechanism needs to be extended (with HVE) to account for the virtualization layer and refine data-locality-related policies so the multiple daemons are optimized to work together seamlessly. Breaking data and compute apart and placing them in virtual machines also allows for rapid provisioning, better elasticity and hardware utilization, and builds high availability into the processes.

Breaking apart Hadoop compute and storage into separate, virtualized machines can speed up processing of jobs.

However, no admin worth their salt is going to do any of this if performance decreases. While VMware continues to invest in improving performance for virtualizing Hadoop, we can prove today that performance is on par, and show the potential for the future.

The Virtualized Hadoop Benchmark

The benchmark used the TeraSort Suite found in the Cloudera distribution. This example application is often considered to be representative of real Hadoop workloads. It creates, sorts, and validates a large number of 100-Byte records, with results reported for eighty billion records (also referred to as the “8TB” dataset).
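The “8TB” label follows directly from the record count and record size; a quick arithmetic check:

```python
records = 80_000_000_000  # eighty billion TeraSort records
record_size = 100         # bytes per record
total_bytes = records * record_size
print(total_bytes / 10**12)  # prints 8.0 (terabytes, decimal)
```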

>> Complete details of the configuration can be found in the technical whitepaper, Virtualized Hadoop Performance on VMware vSphere® 5.1.

The Benchmark Results

To create the benchmark, each 8TB test was run several times and the best results were used. The same hardware was used to run natively as well as with 1, 2 or 4 VMs per host.

This chart shows that for individual processes, once virtualized, there is minor performance degradation ranging from -4.9% to -12.9%. However, once multiple virtual machines are used on the same hardware, performance improves and closes the gap with bare metal, ranging from -7.1% to +1.8%. The data point to call out here is that the TeraSort process was actually faster virtualized than on bare metal, showing significant promise for the virtual platform.
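The percentage figures in these results are simply elapsed times expressed relative to the native run. A small helper makes the convention concrete; the timings below are hypothetical, chosen only to reproduce two of the reported percentages:

```python
def relative_perf(native_seconds, virtual_seconds):
    """Percent difference vs. bare metal: negative means the virtualized
    run was slower, positive means it was faster."""
    return (native_seconds - virtual_seconds) / native_seconds * 100.0

# Hypothetical elapsed times (seconds) for one benchmark phase
print(round(relative_perf(1000, 1049), 1))  # prints -4.9 (1 VM, slightly slower)
print(round(relative_perf(1000, 982), 1))   # prints 1.8 (4 VMs, faster than native)
```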

It is also useful to look at how these processes work in succession and against utilization. As the diagram above shows, the processes vary in run time but ultimately finish together. From this, we conclude that performance with 4 virtual machines per host is on par with bare-metal Hadoop deployments.

Additional Reading:

by Stacey Schneider at May 28, 2013 05:10 PM

Don’t Leave Security Off the Table

VMware Consulting Blog

By Bill Mansfield, VMware Professional Services Consultant

At a large majority of my enterprise customers, I find myself discussing non-technical issues: brokering a truce between operational organizations that have evolved in their own silos and don’t play well with others. In the early days of virtualization, it was difficult to get three key parties in the same room in large shops to hash out architectural requirements and operational process. Networking, Storage, and Virtualization were typically at odds with each other for any number of reasons, and getting everyone to play nice was difficult. These days, it’s primarily Security that’s left out of the room. A large government customer recently told me flat out, “We don’t care about security”, implying that it was another department’s responsibility. Indeed, the SecOps (Security Operations) and SecEngineering (Security Engineering) teams had never been brought into a virtualization meeting in the 7 years virtualization had been in house.

This segregation of the Security team, whether intentional or not, causes some serious problems during a security incident. Typically SecOps only has a view into the core network infrastructure and some agent-based sensors that may or may not make it onto the VMs being investigated. Network sensors typically only exist at the edges of the network, and occasionally at the core in larger shops. Any VM-to-VM traffic may or may not even transit the physical network at any given time. For a long time, the ability to watch virtual switches for data was not available, and the Security teams got used to that. These days, all the traditional methods of monitoring and incident investigation are readily available within vSphere. The vSphere 5.1 Distributed Virtual Switch can produce NetFlow data for consumption by any number of tools. RSPAN and ERSPAN can provide full remote network monitoring or recording. Inter-VM traffic is no longer invisible to Security tools. Security teams just need to be involved, and need to hook their existing toolset into the software-defined data center. No need to reinvent the wheel. Sure, we can enhance capabilities, but first we need to get the Security teams to the table and allow them to use the tools they already have.

So what are some typical questions from Security Operations about the Software-defined data center? Some of them I can answer, some of them are still works in progress.  All of which deserve their own write-ups.

How do we monitor the network?

  • Port Mirroring has been around for a while, and Netflow, RSPAN and ERSPAN capabilities now allow us to function with a great deal of industry standard tools.

How do we securely log events?

  • SIEM integration is fairly straightforward via Syslog or direct pulls from the relevant vSphere databases.
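Getting vSphere events into a SIEM usually starts with parsing syslog lines into structured fields. A minimal sketch follows; the log line below is only shaped roughly like an ESXi message, and the exact format varies by vSphere version, so treat both the sample line and the pattern as assumptions:

```python
import re

# Illustrative line, roughly the shape of an ESXi hostd syslog message
line = "2013-05-28T16:15:02Z esxi01.example.com Hostd: [info] User root logged in"

pattern = re.compile(
    r"(?P<timestamp>\S+)\s+(?P<host>\S+)\s+(?P<process>[^:]+):\s+"
    r"\[(?P<level>\w+)\]\s+(?P<message>.*)"
)

event = pattern.match(line).groupdict()
print(event["host"], event["level"])  # prints: esxi01.example.com info
```

Once events are dictionaries like this, forwarding them to a SIEM or filtering on severity becomes ordinary data handling.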

Where do we put IDS/IPS?

  • Leave the traditional edge monitoring in place, enhance with solutions inside the vSphere stack.
  • vSphere accommodates traditional agent based IPS as well as a good number of agentless solutions via EPSec and NetX API integration.  Most of the major vendors have some amount of integration.

Can you accommodate for segregation of duties?

  • vSphere and vCNS vShield Manager both provide role based segregation and audit capability.

Can you audit against policy?

  • This is a big topic. We can audit host profiles and admin activity in vCenter. We can audit almost anything in vCenter Configuration Manager at all levels of the stack.
  • We can baseline the network traffic of the enterprise with vADP (Application Discovery Planner, not to be confused with our backup API.) We can periodically check for deltas with vADP to find anomalous traffic.

What tools work with VMware to assist with forensics and incident management?

  • Again, this is another big topic. Guests are just data, and a VM doesn’t know when it’s had a snapshot taken. I’ve worked with EnCase, CAINE, BackTrack, and other tools to look at things raw. Procedurally it’s fairly simple. DD off the datastore to run through one of the usual tools and/or run the tool against copies of the VMDKs in question.
  • On the Network side, tie ERSPAN to Wireshark, and use traditional methodology. If you’re feeling clever you can look at live memory by recording a vMotion.
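Whatever analysis tool is used, hashing the copied VMDK when it is collected and again after examination is a common way to demonstrate that the evidence was not altered. A minimal sketch, with a hypothetical file path:

```python
import hashlib

def sha256_of(path, chunk_size=1 << 20):
    """Stream the file in 1 MB chunks so even multi-GB VMDK copies
    hash in constant memory."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

# Record this value when the copy is taken, and verify it before and
# after analysis (the path below is illustrative):
# print(sha256_of("evidence/suspect-flat.vmdk"))
```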

How does legal chain of custody work for forensics on a VM?

  • I’m not a lawyer. I’m not a certified forensic examiner. So, I’ve always had someone from a firm who specializes in forensics like Foundstone with me to handle the paperwork.

Is this a comprehensive list? Not at all. It’s just the beginning. The first step is getting Security to the table and getting them actively participating in design and operational decisions. With higher and higher consolidation ratios, it becomes more important than ever to instrument the virtual infrastructure. For larger organizations, tools like EMC NetWitness can provide insight into all aspects of the software-defined data center. SIEM engines like ArcSight can correlate events and provide an enterprise-wide threat dashboard. For small organizations, there are plenty of open source tools available.

Security professionals, where are you seeing resistance while trying to do your jobs in the software-defined data center? What requirements are you finding most challenging to address? Let us know in the comments below!

Bill Mansfield has worked as a Senior Security Consultant at VMware for the past 6 years. He has extensive knowledge on transitioning traditional security tools into the virtual world.

by VMware Consulting at May 28, 2013 04:15 PM

VMware Store purchasing and order details

VMware Support Insider

We have a new video today which discusses and demonstrates the VMware Online Store purchasing process and order details.

In this short video tutorial you will see how you can use My VMware to place and review your orders made through the VMware Store.

For additional information, refer to VMware Knowledge Base article: VMware Store purchasing and order details (2006980).

Note: For best viewing results, ensure that the 720p setting is selected and view using the full screen mode.

by Graham Daly at May 28, 2013 02:44 PM

May 24, 2013

VXLAN Series – How VTEP Learns and Creates Forwarding Table – Part 5

VMware vSphere Blog

In this post I am going to describe how VTEPs learn about the virtual machines connected to the logical Layer 2 networks. The learning process is quite similar to a transparent bridge function. Just as transparent bridges learn based on the packets received on the bridge ports, a VTEP also learns based on the inner and outer headers of the packets it receives.

Let’s take an example to illustrate the VTEP learning process.

Example Deployment with Two Hosts

As shown in the diagram above, there are two hosts (Host 1 and Host 2) on which VTEPs are configured, and each host has one virtual machine connected to the logical Layer 2 network identified as VXLAN 5001. Both virtual machines are powered on and both VTEPs have joined the multicast group 239.1.1.100. Each VTEP has its own forwarding table, which is initially empty as shown in the diagram below.

Initial State of the Forwarding Table

How do the forwarding tables get populated?

Consider the virtual machine on Host 1 trying to communicate with the virtual machine on Host 2. First, virtual machine MAC1 sends an ARP request to find the MAC address of the virtual machine on Host 2. The ARP request is a broadcast packet.

Host 2 VTEP – Forwarding table entry

The diagram above shows the packet flow:

  1. The virtual machine on Host 1 sends an ARP packet with the destination MAC address “FF:FF:FF:FF:FF:FF” (Ethernet broadcast).
  2. VTEP on Host 1 encapsulates the Ethernet broadcast packet into a UDP header with Multicast address “239.1.1.100” as the destination IP address and VTEP address “10.20.10.10” as the Source IP address.
  3. The physical network delivers the multicast packet to the hosts that joined the multicast group address “239.1.1.100”.
  4. The VTEP on Host 2 receives the encapsulated packet. Based on the outer and inner headers, it makes an entry in the forwarding table that maps the virtual machine MAC address to the VTEP. In this example, the virtual machine MAC1 running on Host 1 is associated with VTEP IP “10.20.10.10”. The VTEP also checks the segment ID, or VXLAN logical network ID (5001), in the external header to decide whether the packet should be delivered on this host.
  5. The packet is de-encapsulated and delivered to the virtual machine connected on that logical network VXLAN 5001.

The entry in the forwarding table of the Host 2 VTEP is used during the lookup process. The packet flow shown in the diagram below explains the forwarding table lookup for a unicast packet sent from the virtual machine on Host 2.

Host 2 VTEP – Forwarding table Lookup

  1. Virtual machine MAC2 on Host 2 responds to the ARP request by sending a unicast packet with MAC1 as the destination Ethernet MAC address.
  2. After receiving the unicast packet, the VTEP on Host 2 performs a lookup in the forwarding table and gets a match for the destination MAC address “MAC1”. The VTEP now knows that to deliver the packet to virtual machine MAC1 it has to send it to VTEP with IP address “10.20.10.10”.
  3. The VTEP creates a unicast packet with “10.20.10.10” as the destination IP address and sends it out.

The Host 1 VTEP receives the unicast packet and, in turn, learns the location of virtual machine MAC2, as shown in the diagram below.

Host 1 VTEP – Forwarding table entry

  1. The packet is delivered to Host 1.
  2. The VTEP on Host 1 receives the encapsulated packet. Based on the outer and inner headers, it makes an entry in the forwarding table that maps the virtual machine MAC address to the VTEP. In this example, the virtual machine MAC2 running on Host 2 is associated with VTEP IP “10.20.10.11”. The VTEP also checks the segment ID, or VXLAN logical network ID (5001), in the external header to decide whether the packet should be delivered on this host.
  3. The packet is de-encapsulated and delivered to the virtual machine connected on that logical network VXLAN 5001.

As you can see, the forwarding table entries are populated based on the inner and outer header fields of the encapsulated packet. As with a transparent bridge, forwarding table entries are removed after an aging timer expires. One of the common questions I get is what happens after a virtual machine is vMotioned.
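The learn-on-receive and lookup behavior described above can be sketched as a small model (the class, method, and variable names here are purely illustrative, not VMware code):

```python
import time

class VtepForwardingTable:
    """Toy model of VTEP learning: map an inner source MAC to the
    outer source VTEP IP, and age entries out like a transparent
    bridge would."""

    def __init__(self, aging_seconds=300):
        self.aging_seconds = aging_seconds
        self.entries = {}  # inner MAC -> (VTEP IP, last-seen timestamp)

    def learn(self, inner_src_mac, outer_src_ip):
        # Called for every encapsulated packet received on the segment.
        self.entries[inner_src_mac] = (outer_src_ip, time.time())

    def lookup(self, inner_dst_mac):
        # Return the destination VTEP IP, or None, in which case the
        # packet would be flooded to the segment's multicast group.
        entry = self.entries.get(inner_dst_mac)
        if entry is None:
            return None
        vtep_ip, last_seen = entry
        if time.time() - last_seen > self.aging_seconds:
            del self.entries[inner_dst_mac]  # entry has aged out
            return None
        return vtep_ip

# Host 2's VTEP sees the encapsulated ARP broadcast from MAC1:
table = VtepForwardingTable()
table.learn("MAC1", "10.20.10.10")
# The unicast ARP reply from MAC2 can now be sent directly:
print(table.lookup("MAC1"))  # -> 10.20.10.10
```

A miss on lookup is what triggers the multicast flooding shown in the first packet flow; a hit allows the direct unicast encapsulation shown in the second.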

In the next few posts I will cover how the forwarding table entries get modified after vMotion of a virtual machine from one host to another.

Here are the links to Part 1, Part 2, Part 3, Part 4.

Get notification of these blog postings and more VMware Networking information by following me on Twitter: @VMWNetworking

by Vyenkatesh Deshpande at May 24, 2013 04:31 PM

Securely deliver apps, data and desktops to personal devices

VMware End User Computing

As customers start embracing Bring-Your-Own-Device (BYOD), users can choose their own devices to improve productivity and collaboration. It is critical that all the different components remain highly secure as users adopt devices that are outside the network perimeter and not controlled by the IT department. Customers researching solutions should delve deeper into the security aspects of the different components. We recently posted a couple of documents that discuss basic security considerations and the security features in Horizon Workspace v1.0, and how security was top of mind when we built VMware Horizon Workspace. We will address security issues including privacy, compliance, and risk management standards in the next security blog.

by Cynthia Hsieh at May 24, 2013 04:12 PM

Service Definition – The Tradeoff Between Standardization and Agility

VMware Cloud Ops Blog

By Rohan Kalra and Pierre Moncassin

In the client-server era, IT demonstrated responsiveness by designing infrastructure to meet the technical requirements of the various applications the business relied on to do work. Developers spec’d systems. Ops built the systems. Devs changed the specs. The back and forth continued until the systems were live in production.

There were attempts to enforce architecture standards designed to control the chaos of having every system be a unique work of art, but business needs typically trumped IT’s need for simplicity. If developers for a critical business application demanded some unique middleware configuration, they usually got what they requested.

As a result, most IT organizations have racks full of one-off systems that are unique and often hard to support.  “A museum of past technology decisions” is one way to describe the typical enterprise datacenter landscape.

Cloud changes everything

Cloud computing changes this paradigm. With cloud, developers and users experience the value of fast access to standardized commodity compute resources. By accepting and designing around standard resource configurations, developers no longer need to predict usage levels to set capacity requirements, and no longer have to wait through long procurement cycles. Similarly, by accepting one-size-fits-all, consumers get immediate access to a wide range of ready-to-use apps.

The trade-off IT consumers make is essentially one of releasing control over technical assets in order to gain control over business processes. In return for accepting increased standardization (typically at the ‘nuts and bolts’ level, e.g. infrastructure, catalog, OLAs, charging models), they get unprecedented agility at the business level (“on-demand” IT, both in provisioning and in scaling as usage levels change).

In the cloud era, IT demonstrates responsiveness by giving developers and users immediate access to standard IT services accessed and then scaled on demand.

As a result, IT success in the cloud era depends, to a large extent, on IT consumers understanding this tradeoff and appreciating the value of standardization.

Start with common service definition

The first step to achieving standardization is getting agreement on a common service definition. This includes getting multiple groups that traditionally have requested and received custom work to agree on the details of standard services. There is an art in building this consensus, as different consumers with unique requirements need to come together to make this a success. The key is communication and consistency, from the collection of requirements through the delivery of services. (More on this process in a future blog post.)

Another critical step is standardizing and centralizing an organization’s service catalog and portal. This allows for a consistent and secure customer experience that provides access across all services regardless of the underlying environment – physical, virtual, and private and public cloud resources.

Standardization also enables IT to be a true service broker, picking the right environment to meet the needs of each service or workload. A service broker strategy includes policy-based governance, service-based costing, and end-to-end life cycle management across all types of internal and external services.

Today, organizations that understand the need for standardization are the ones transforming themselves to be more responsive with cloud-based operating models. For them, standardization is the driver to both increase business agility, and become more efficient from an OPEX perspective.

Key actions you can take:

1. Acknowledge the problem.

Is this true within your organization?

  • Multiple single points of failure?
  • Specific individuals supporting legacy applications without documented runbooks or recovery procedures?
  • Continuous fire-fights due to complex architectures leading to business downtime?
  • Inefficient manual procedures?
  • War-room-like setups to solve problems, with limited or no root cause analysis or preventive measures for the future?

2. Before embarking on the journey, take stock candidly of what is actually being delivered today. Ask probing questions about your current-state services.

  • What service levels are actually being delivered (not just promised ‘on paper’)?
  • What services look ‘gold plated’ and could be simplified?
  • What services are never, or very occasionally used?

Once you have a firm baseline, you are ready to start the journey.

3. Understand it’s a journey and it takes time. There is no big bang answer to solving this problem.

  • Start with small wins within your organization’s cloud transformation.
  • Development environments are ideal proving grounds.
  • Institute a cloud-first policy.

4. Create a cloud strategy and focus on building business consensus through business communication and outreach.

For more on this topic, join Khalid Hakim with John Dixon of Greenpages for the May 30th #CloudOpsChat on Reaching Common Ground When Defining Services!

For future updates, follow us on Twitter at @VMwareCloudOps and join the conversation by using the #CloudOps and #SDDC hashtags.

by CloudOps Team at May 24, 2013 04:00 PM

Build Skills Deploying a Virtual Desktop Infrastructure with View: Install, Configure, Manage

VMware Education & Certification Blog

System administrators and system integrators responsible for deploying the VMware® virtual desktop infrastructure can build valuable skills by attending VMware View: Install, Configure, Manage. This 4-day course teaches you to:

  • Install and configure View components
  • Create and manage dedicated and floating desktop pools
  • Deploy and manage linked-clone virtual desktops
  • Configure and manage desktops that run in local mode
  • Configure user profiles with View Persona Management
  • Configure secure access to desktops through a public network
  • Use ThinApp to package applications

The course is available in a variety of delivery methods.

Sign up today! Sign up for a course in the US or Canada that begins before June 30 and save 15%.

by Elaine Sherwood at May 24, 2013 12:43 AM

SMBs Prefer Horizon View Over Other VDI Solutions – Two Years in a Row

VMware End User Computing

By Courtney Burry, Director, Desktop Product Marketing, VMware

I’m excited to share that a recent study conducted by Spiceworks with small and midsized businesses (SMBs) revealed that VMware Horizon View is the solution of choice over the competition for a second year in a row.

By an almost 2-to-1 margin, Horizon View remains the top choice with the number of VDI agents deployed increasing from 53 percent in December 2011 to 57 percent in February 2013. The three leading VDI solutions found among SMBs were VMware Horizon View (57 percent), Citrix Virtual Desktop (31 percent) and NComputing vSpace Client (6 percent).

We’ve been working tirelessly with our technology partners to introduce new programs and innovative architectures to bring down the cost of VDI, and today Horizon View is more affordable than ever before. This means that modernizing Windows desktops by transforming them into a centrally managed service is an option not just for big-budget enterprises but also for cost-conscious SMBs. In addition, this study validates what we’ve been hearing anecdotally all along from customers and channel partners – they prefer Horizon View because it is easy to use and it works the way they expect it to. This is a strong testament to all the hard work the product management team has put into making Horizon View a truly world-class product.

If you’re an SMB and think virtual desktops might be a good solution for your organization, visit our product page to learn more. You can also download the complete study conducted by Spiceworks here.

* Spiceworks, the vertical network of more than 2.5 million IT professionals worldwide, conducted the proprietary study on desktop virtualization industry trends in March 2013 with nearly

by VMware EUC at May 24, 2013 12:11 AM

May 23, 2013

It All Started with Server Virtualization

VMware Accelerate

Rob Jenkins, Director of VMware Accelerate Advisory Services in EMEA, presented on the journey to virtualized compute — from server consolidation, to automation, to game-changing ITaaS — at IDC’s Cloud and Virtualisation event in Dublin this month. At the time, no one predicted the impact server virtualization would have on the IT industry. VMware’s early customers achieved unheard-of cost savings and ROIs, leading to unprecedented adoption of this technology by more than 500,000 customers.

You can follow Rob @cloud_rob

by Heidi Pate at May 23, 2013 09:44 PM

Physical or Appliance – Upgrading to vCenter Server 5.1

VMware Support Insider

The other day we received this question from a customer via Twitter:

@VMwareCares planning to upgrade to 5.1 from 5.0 vcenter. What’s recommended physical or appliance? Ups and downs side of each?

We thought a few more of you might have the same question, so we decided to take the opportunity to explain the differences between vCenter Server and the vCenter Server Appliance, and when you might choose one over the other.

The vCenter Server Appliance (vCSA) is a preconfigured Linux-based virtual machine optimized for running vCenter Server and associated services. Versions 5.0.1 and 5.1 of the vCSA use PostgreSQL for the embedded database instead of IBM DB2, which was used in vCenter Server Appliance 5.0. The vCSA’s embedded PostgreSQL database supports 5 hosts / 50 virtual machines; with an external Oracle database, the vCSA can support 1,000 hosts and 10,000 VMs. If you configure your vCSA to use an external instance of Single Sign On (SSO), the external SSO instance must be hosted on another vCenter Server Appliance; it cannot be hosted on a Windows machine.

vCenter Server can be installed on a Windows guest OS and can be connected to Oracle or Microsoft SQL Server. SSO can be installed on the same guest OS or on a different machine. It should be noted that patching of the vCenter Server Appliance is not supported.
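The inventory limits above suggest a simple sizing check when deciding between the embedded and external database. The numbers below come straight from the paragraph (5 hosts / 50 VMs embedded, 1,000 hosts / 10,000 VMs with external Oracle); the function itself is only an illustration, not a VMware tool:

```python
def vcsa_database_fits(hosts, vms, external_oracle=False):
    """Return True if a vCSA 5.0.1/5.1 deployment of this size is
    within the documented limits for the chosen database."""
    if external_oracle:
        return hosts <= 1000 and vms <= 10000  # external Oracle DB
    return hosts <= 5 and vms <= 50            # embedded PostgreSQL

assert vcsa_database_fits(5, 50)                        # embedded is enough
assert not vcsa_database_fits(20, 200)                  # too big for embedded
assert vcsa_database_fits(20, 200, external_oracle=True)
```

Inventories beyond the external-Oracle limits would point you at the Windows-based vCenter Server instead.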

Below is a table listing more of the differences between the products.

Feature | vCenter Server | vCenter Server Appliance
Guest OS | Any supported guest OS | Preconfigured Linux-based virtual machine (64-bit SUSE Linux Enterprise Server 11)
Database support | SQL Server and Oracle | PostgreSQL (built-in; 5 hosts and 50 virtual machines) or an external Oracle database
System requirements | 2 vCPU and 4 GB RAM | 2 vCPU and 4 GB RAM
Platform | Physical or virtual machine | Virtual appliance
Installation | Using binaries provided in .zip or .ISO | Deploying an OVF
Update Manager | Can be installed on the same vCenter Server or on a separate guest OS | Separate install
Single Sign On (SSO) | Can be installed on the same vCenter Server or a separate guest OS | Pre-installed
Networking | IPv6 and IPv4 support | IPv4 support
Linked Mode | Supported | Not supported
SRM (Site Recovery Manager) | Compatible with SRM | Compatible with SRM
vSphere Web Client | Can be installed on the same vCenter Server or a separate machine | Pre-installed
Syslog server | Can be installed on vCenter Server or a separate server and configured using a plug-in | Pre-installed; no plug-in
ESXi Dump Collector | Can be installed on vCenter Server or on a separate guest OS | Pre-installed; no plug-in
Multi-site SSO | Supported | Not supported (basic SSO only)
VSA (vSphere Storage Appliance) | Supported | Not supported
VMware View | Supported | Not supported

by Jasbinder Bhatti at May 23, 2013 05:53 PM

What was your favorite VMworld session?

VMTN Blog

This year will be the 10th annual VMworld event. (Have you registered for VMworld yet?) In the run-up to the event, we’ll be running some features looking back at the previous nine years of what I like to think of as the World’s Best (And Most Intense) Technology Conference.

Photo: Peter Tsai/Dell

One of the best parts of VMworld is the deep set of breakout sessions. Deeply knowledgeable VMware employees, partners, and customers share their experiences in hundreds of sessions over the week. Some of us go to many sessions; some of us wait until afterwards, but the week is always filled with deep technical conversations during every waking hour. Although the conference is now very large and firmly based in San Francisco, even in earlier years the conference team shared with us that it was hard to find locations to hold VMworld US and Europe because, unlike other events, we needed a venue with dozens of large breakout rooms!

So here’s the question – what’s been your favorite session or lab at VMworld over the years? Have any stuck in your head years later? And what makes it stand out in your mind? Was it the speaker, the new idea, or even the funny running commentary from your neighbor? Or was it how you had just the right answer when you got back to work and your boss thought you were a psychic genius? Or were you watching the right video presentation at the right time after the event? You can share stories about waiting in line or not getting the sessions you want, but those tend to be less fun.

Leave your story here in the comments and watch out for our series on the VMworld Blog recapping the history of VMworld and getting ready for the 10th Annual VMworld 2013 in San Francisco, where we will, once again, “Defy Convention.”

by John Troyer at May 23, 2013 05:51 PM

Power Management and Performance in ESXi 5.1

VROOM!

Power and cooling are a substantial portion of datacenter costs. Ideally, we could minimize these costs by optimizing the datacenter’s energy consumption without impacting performance. The Host Power Management feature, which has been enabled by default since ESXi 5.0, allows hosts to reduce power consumption while boosting energy efficiency by putting processors into a low-power state when not fully utilized.

Power management can be controlled by either the BIOS or the operating system. In the BIOS, manufacturers provide several types of Host Power Management policies. Although they vary by vendor, most include “Performance,” which does not use any power saving techniques, “Balanced,” which claims to increase energy efficiency with minimal or no impact to performance, and “OS Controlled,” which passes power management control to the operating system. The “Balanced” policy is variably known as “Performance per Watt,” “Dynamic” and other labels; consult your vendor for details. If “OS Controlled” is enabled in the BIOS, ESXi will manage power using one of the policies “High performance,” “Balanced,” “Low power,” or “Custom.” We chose to study Balanced because it is the default setting.

But can the Balanced setting, whether controlled by the BIOS or ESXi, reduce performance relative to the Performance setting? We have received reports from customers who have had performance problems while using the BIOS-controlled Balanced setting. Without knowing the effect of Balanced on performance and energy efficiency, when performance is at a premium users might select the Performance policy to play it safe. To answer this question we tested the impact of power management policies on performance and energy efficiency using VMmark 2.5.

VMmark 2.5 is a multi-host virtualization benchmark that uses varied application workloads as well as common datacenter operations to model the demands of the datacenter. VMs running diverse application workloads are grouped into units of load called tiles. For more details, see the VMmark 2.5 overview.

We tested three policies: the BIOS-controlled Performance setting, which uses no power management techniques, the ESXi-controlled Balanced setting (with the BIOS set to OS-Controlled mode), and the BIOS-controlled Balanced setting. The ESXi Balanced and BIOS-controlled Balanced settings cut power by reducing processor frequency and voltage among other power saving techniques.

We found that the ESXi Balanced setting did an excellent job of preserving performance, with no measurable performance impact at all levels of load. Not only was performance on par with expectations, but it did so while producing consistent improvements in energy efficiency, even while idle. By comparison, the BIOS Balanced setting aggressively saved power but created higher latencies and reduced performance. The following results detail our findings.

Testing Methodology
All tests were conducted on a four-node cluster running VMware vSphere 5.1. We compared performance and energy efficiency of VMmark between three power management policies: Performance, the ESXi-controlled Balanced setting, and the BIOS-controlled Balanced setting, also known as “Performance per Watt (Dell Active Power Controller).”

Configuration
Systems Under Test: Four Dell PowerEdge R620 servers
CPUs (per server): One Eight-Core Intel® Xeon® E5-2665 @ 2.4 GHz, Hyper-Threading enabled
Memory (per server): 96GB DDR3 ECC @ 1067 MHz
Host Bus Adapter: Two QLogic QLE2562, Dual Port 8Gb Fibre Channel to PCI Express
Network Controller: One Intel Gigabit Quad Port I350 Adapter
Hypervisor: VMware ESXi 5.1.0
Storage Array: EMC VNX5700
62 Enterprise Flash Drives (SSDs), RAID 0, grouped as 3 x 8 SSD LUNs, 7 x 5 SSD LUNs, and 1 x 3 SSD LUN
Virtualization Management: VMware vCenter Server 5.1.0
VMmark version: 2.5
Power Meters: Three Yokogawa WT210

Results
To determine the maximum VMmark load supported for each power management setting, we increased the number of VMmark tiles until the cluster reached saturation, which is defined as the largest number of tiles that still meet Quality of Service (QoS) requirements. All data points are the mean of three tests in each configuration and VMmark scores are normalized to the BIOS Balanced one-tile score.

Effects of Power Management on VMmark 2.5 score

The VMmark scores were equivalent between the Performance setting and the ESXi Balanced setting, with less than a 1% difference at all load levels. However, running on the BIOS Balanced setting reduced the VMmark scores by an average of 15%. On the BIOS Balanced setting, the environment was no longer able to support nine tiles and, even at low loads, on average, 31% of runs failed QoS requirements; only passing runs are pictured above.

We also compared the improvements in energy efficiency of the two Balanced settings against the Performance setting. The Performance per Kilowatt metric, which is new to VMmark 2.5, models energy efficiency as VMmark score per kilowatt of power consumed. More efficient results will have a higher Performance per Kilowatt.

Effects of Power Management on Energy Efficiency

Two trends are visible in this figure. As expected, the Performance setting showed the lowest energy efficiency. At every load level, ESXi Balanced was about 3% more energy efficient than the Performance setting, despite the fact that it delivered an equivalent score to Performance. The BIOS Balanced setting had the greatest energy efficiency, 20% average improvement over Performance.

Second, an increase in load is correlated with greater energy efficiency. As the CPUs become busier, throughput increases at a faster rate than the required power. This can be understood by noting that an idle server still consumes power, but with no work to show for it. A highly utilized server is typically the most energy efficient per request completed, which is confirmed in our results. Higher energy efficiency creates cost savings in host energy consumption and in cooling costs.

The bursty nature of most environments leads them to sometimes idle, so we also measured each host’s idle power consumption. The Performance setting showed an average of 128 watts per host, while ESXi Balanced and BIOS Balanced consumed 85 watts per host. Although the Performance and ESXi Balanced settings performed very similarly under load, hosts using ESXi Balanced and BIOS Balanced power management consumed 33% less power while idle.
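The idle savings quoted above follow directly from the measured wattages, and Performance per Kilowatt is simply score divided by kilowatts consumed. A quick check (the function names are ours, for illustration, not part of VMmark):

```python
def pct_savings(baseline_watts, measured_watts):
    """Percent power saved relative to the baseline setting."""
    return 100.0 * (baseline_watts - measured_watts) / baseline_watts

def perf_per_kilowatt(vmmark_score, watts):
    """VMmark 2.5's efficiency metric: score per kilowatt consumed.
    Higher is more efficient."""
    return vmmark_score / (watts / 1000.0)

# Idle power from the measurements above: 128 W (Performance) vs 85 W
# (either Balanced policy) -- roughly the 33% savings quoted.
savings = pct_savings(128, 85)  # ~33.6%
```

The same `perf_per_kilowatt` arithmetic explains why a busy host scores better on the metric: the score grows faster with load than the wattage in the denominator.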

VMmark 2.5 scores are based on application and infrastructure workload throughput, while application latency reflects Quality of Service. For the Mail Server, Olio, and DVD Store 2 workloads, latency is defined as the application’s response time. We wanted to see how power management policies affected application latency as opposed to the VMmark score. All latencies are normalized to the lowest results.

Effects of Power Management on VMmark 2.5 Latencies

Whereas the Performance and ESXi Balanced latencies tracked closely, BIOS Balanced latencies were significantly higher at all load levels. Furthermore, latencies were unpredictable even at low load levels, and for this reason, 31% of runs between one and eight tiles failed; these runs are omitted from the figure above. For example, half of the BIOS Balanced runs did not pass QoS requirements at four tiles. These higher latencies were the result of aggressive power saving by the BIOS Balanced policy.

Our tests showed that ESXi’s Balanced power management policy didn’t affect throughput or latency compared to the Performance policy, but did improve energy efficiency by 3%. While the BIOS-controlled Balanced policy improved power efficiency by an average of 20% over Performance, it was so aggressive in cutting power that it often caused VMmark to fail QoS requirements.

Overall, the BIOS controlled Balanced policy produced substantial efficiency gains but with unpredictable performance, failed runs, and reduced performance at all load levels. This policy may still be suitable for some workloads which can tolerate this unpredictability, but should be used with caution. On the other hand, the ESXi Balanced policy produced modest efficiency gains while doing an excellent job protecting performance across all load levels. These findings make us confident that the ESXi Balanced policy is a good choice for most types of virtualized applications.

by Rebecca Grider at May 23, 2013 05:34 PM

vCenter Server 5.1 Update 1a

VMware Support Insider

Back on April 29th we posted an alert, ALERT: Login issue after updating to vCenter 5.1 Update 1, which detailed a scenario whereby customers might be unable to log in using the vSphere Web Client, or with domain username/password credentials via the vSphere Client, after updating to vCenter Server 5.1 Update 1.

Tonight at 7:30pm PST, vCenter Server 5.1 Update 1 will be removed from the VMware download site and will be replaced by vCenter Server 5.1 Update 1a. The primary aim of the 5.1 U1a release is to address the regression that was identified in 5.1 U1.

Customers are urged to read the README included with the new update before they apply the update.

Details of what has and has not been fixed are provided in this KB article: http://kb.vmware.com/kb/2037410

by Rick Blythe at May 23, 2013 02:35 AM

May 22, 2013

How Laurens County Health Care System Achieved a 65% Decrease in Hardware Costs and Uptime in the 99.999% Range by Deploying Meditech in a Virtualized Environment

VMware for Small-Medium Business Blog

Post by Brandon Sweeney, Vice President U.S. Mid-Market Businesses

Many midmarket organizations face challenges in equipping their IT infrastructure to support privacy and regulatory protocols and ensure consistent uptime, but those in the health care field have a unique challenge. For a healthcare professional, the reliability needed in their IT environment can literally be the difference between life and death. Finding an IT solution that supports compliance and ensures utmost performance is essential.

Pivotal Turning Points

Laurens County Health Care System, a 90-bed health care organization in Clinton, South Carolina, was looking to deploy the Meditech clinical information system to improve patient information and care, as well as enable computerized physician order entry (CPOE). With 30 physical servers in their data center, an offline server could cause multiple facets of their infrastructure to go down. Additionally, they found their staff of 11 IT workers often fighting fires instead of proactively addressing business needs. Hospital employees were using so many different laptops and desktops that just maintaining current versions of basic software was difficult. Downtime was up to 40% in the physical data center.

Running Meditech in the current environment was not feasible. Laurens County Health Care System needed to deploy a cost-efficient plan and looked to VMware for a solution.

The Solutions Journey

By deploying VMware vSphere®, the IT staff virtualized their domain controllers, then implemented Exchange Server internally and moved it to a virtualized infrastructure. By adding vSphere vMotion, they enabled high availability for their virtualized file and print servers, as well as various applications.

“It’s a nightmare maintaining 50 physical PCs on nursing carts. Today, I can roll out 30 virtual desktops in 15 minutes and manage them all from a central location,” said Joe Lovell, IT Infrastructure Manager at Laurens County Health Care System. “Obviously, there are other vendors out there, but given the technical strengths, ease of implementation, and the centralized management capabilities, VMware was the obvious choice.”

VMware View enabled the organization to employ a Bring Your Own Device (BYOD) policy for physicians, while also solving compliance and security concerns and ensuring the mobility necessary for excellent patient care.

“Physicians are very excited that they can use their own devices from anywhere to access Meditech and review EMRs [electronic medical records] and patient data without physically having to come into the hospital,” says Gina Driggers, IT Director at Laurens County Health Care System.   “With View, no matter what device they’re using, all the computing happens here in the data center, which is a huge security and compliance safety net.”

Immediate Business Benefits and Looking Forward

Using VMware vSphere, Laurens County Health Care System successfully deployed Meditech in a virtualized environment and is on target for Stage 1 of Meaningful Use.

In addition, the virtualized data center servers now see uptime in the 99.999% range. On top of that, by switching from PCs to thin clients, the organization was able to decrease hardware costs by 65%.

Next up, Laurens County Health Care System is looking to tackle disaster recovery with vCenter Site Recovery Manager. “And VMware is not just for large hospitals,” says Lovell. “It is definitely something that can help smaller hospitals that don’t have the financial resources and the IT resources of a larger organization.”

Read the full success story about Laurens County Health Care System here.

I look forward to continuing to share these stories and demonstrating how VMware can help you simplify your infrastructure and deliver real world results for your business.

Have you faced similar IT or compliance challenges in your organization? How have you solved them?

I look forward to hearing your thoughts in the comments.

Until next time,

Brandon

Follow VMware SMB on Facebook, Twitter, Spiceworks and Google+ for more blog posts, conversation with your peers, and additional insights on IT issues facing small to midmarket businesses.

by VMware SMB at May 22, 2013 08:48 PM

About VMware Blogs

Planet V12n

The best virtualization blogs from around the planet.

Read the latest from Planet V12n

VMware Blogs

VMware Blogs RSS | OPML

Last updated: June 07, 2013 12:42 AM UTC