
THINK YOU’RE READY FOR BIG DATA AND IOT? STANDARD TESTS JUST AREN’T ENOUGH
May 19, 2016

Rapid time to market is becoming increasingly important in the rollout of new applications and services; in simpler terms, everyone wants to be first. So new architectures are planned with virtual environments and hybrid clouds on the drawing board and then implemented, only to learn that customers are complaining about a loss of quality in VoIP service and online gamers about long ping times. Waiting for customer complaints is one of three basic ways to learn about the performance and resilience of your network, but it certainly isn’t the most promising. Waiting for a hacker to paralyze your network is the second option, but its popularity has limits, too. The third option is called “test.”

Not all test methods are suitable for ensuring the availability of services and applications, however. Approaches that validate performance and security without realistic assumptions about application loads and attack techniques quickly lead to a false sense of security. Only tests based on realistic conditions yield reliable information about the behavior of the network and security infrastructure. Big data and especially the Internet of Things (IoT) will generate significantly higher loads, and the best way to determine how a network will handle them is to test each component involved in delivering services and applications under the most severe expected load conditions.
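
To make the principle concrete, here is a minimal sketch in Python: a throwaway load generator, standard library only, that drives a target at the expected peak concurrency and reports latency percentiles. The endpoint, worker count and duration are placeholders, and a production-grade infrastructure test would use dedicated traffic generators rather than a script like this; the point is simply that the offered load is sized to the worst expected case, not a comfortable average.

```python
# Minimal load-test sketch (illustrative only): drive a target endpoint at the
# *peak expected* concurrency and report latency percentiles.
# TARGET_URL, WORKERS and DURATION_S are placeholder values.
import statistics
import time
import urllib.request
from concurrent.futures import ThreadPoolExecutor

TARGET_URL = "http://test-lab.example.internal/health"  # hypothetical lab endpoint
WORKERS = 200        # concurrent clients at the expected peak
DURATION_S = 60      # test window in seconds

def one_request() -> float:
    """Issue one request and return its response time in milliseconds."""
    start = time.perf_counter()
    with urllib.request.urlopen(TARGET_URL, timeout=5) as resp:
        resp.read()
    return (time.perf_counter() - start) * 1000.0

def worker(deadline: float) -> list[float]:
    samples = []
    while time.perf_counter() < deadline:
        try:
            samples.append(one_request())
        except OSError:
            samples.append(float("inf"))  # count errors as failed samples
    return samples

if __name__ == "__main__":
    deadline = time.perf_counter() + DURATION_S
    with ThreadPoolExecutor(max_workers=WORKERS) as pool:
        futures = [pool.submit(worker, deadline) for _ in range(WORKERS)]
        results = [sample for fut in futures for sample in fut.result()]
    ok = sorted(s for s in results if s != float("inf"))
    print(f"requests: {len(results)}, errors: {len(results) - len(ok)}")
    if ok:
        print(f"p50: {statistics.median(ok):.1f} ms, "
              f"p95: {ok[int(0.95 * len(ok)) - 1]:.1f} ms")
```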

The Best Place to Start Is at the Beginning

The connected world is no longer just a buzzword; it is reality. More than five billion devices are already connected to the Internet, and the rate of newly connected devices will only accelerate with the proliferation of IoT. Forecasts indicate that by 2020, about 50 billion devices will be connected to the Internet—10 times more than today.1 Many of these devices run complex applications that need to communicate with each other around the clock. These additional endpoints not only generate more data automatically, but they also place greater demands on the performance and availability of the network infrastructure. In particular, Web 2.0, HD video and social networking, combined with big data and IoT, have a virtually unlimited hunger for bandwidth. In a report published in January 2016 entitled “ENISA Threat Landscape 2015,” the European Union Agency for Network and Information Security (ENISA) said the number of DDoS attacks with a bandwidth of over 100Gbps doubled in 2015 and will continue to increase.

Meeting these growing demands on a network infrastructure requires a massive upgrade to the data center, ranging from transitioning top-of-rack connectivity from 10GbE to 25GbE and 50GbE to enhancing the core network with 100GbE technology. The expected result of this type of upgrade is significantly higher data rates with approximately the same footprint and power consumption, as well as higher server density and a lower cost per unit of bandwidth. But what guarantees do enterprises have that they can achieve these expectations under real-world conditions?

In addition, the unique characteristics of network devices, storage and security systems—coupled with the virtualization of resources, the integration of cloud computing and SaaS—can significantly slow the introduction and delivery of new services. Ensuring availability of the data rates needed to deliver new services anytime, anywhere, requires infrastructure tests that go beyond standard performance tests of individual components.

Customers and internal stakeholders don’t care how many packets a web-application firewall can inspect per second. They only care about the application response time, which depends on a number of factors. These factors include the individual systems in the network and their interaction, the application-specific protocols and traffic patterns, and the location (and time of day) of the security architecture. Therefore, testing the entire delivery path of an application—from end to end—under realistic conditions is imperative. That means using a realistic mix of applications and traffic workloads that recreates even the lowest-layer protocols. Simple standardized tests, such as I/O meters, are simply not enough in complex environments.
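
As an illustration (not a substitute for the end-to-end test described above), a few lines of Python show why response time has to be measured along the whole delivery path rather than at one box: even a single transaction decomposes into DNS, connection setup and server processing, and any hop can be the one that blows the budget. The host, port and path below are hypothetical.

```python
# Illustrative sketch: time one realistic transaction and break it into path
# segments (DNS, TCP connect, application response) so a slow hop in the
# delivery chain can be spotted. HOST, PORT and PATH are placeholders.
import http.client
import socket
import time

HOST = "app.test-lab.example.internal"   # hypothetical application front end
PORT = 80
PATH = "/api/orders?limit=10"            # hypothetical application-level request

t0 = time.perf_counter()
ip = socket.gethostbyname(HOST)          # DNS resolution
t_dns = time.perf_counter()

probe = socket.create_connection((ip, PORT), timeout=5)   # raw TCP handshake
t_connect = time.perf_counter()
probe.close()

conn = http.client.HTTPConnection(ip, PORT, timeout=5)    # fresh connection for the request
conn.request("GET", PATH, headers={"Host": HOST})
resp = conn.getresponse()
resp.read(1)                             # first byte of the response body
t_ttfb = time.perf_counter()
resp.read()                              # drain the rest of the body
conn.close()

print(f"DNS lookup  : {(t_dns - t0) * 1000:.1f} ms")
print(f"TCP connect : {(t_connect - t_dns) * 1000:.1f} ms")
print(f"App response: {(t_ttfb - t_connect) * 1000:.1f} ms (includes a second TCP handshake)")
```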

Testing Under Real Conditions

Enterprise data centers need a test environment that reflects their real load and actual traffic, including all applications and protocols, such as Facebook, Skype, Amazon EC2 / S3, SQL, SAP, Oracle, HTTP and IPSec. It’s meaningless, and dangerous, to test a data center infrastructure with 200Gbps of data when the live network experiences peak loads over 500Gbps. Additionally, when testing, consider illegitimate traffic, such as increasingly frequent DDoS and synchronized attacks on multithreaded systems. Since attack patterns are constantly changing, timely and continuous tests are crucial. One way to ensure the consistency and timeliness of the testing is to use an external service that can analyze current attack patterns and update the test environment continuously and automatically.
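
One way to keep such a test honest is to write the traffic mix down explicitly and check it against the live peak before every run. The sketch below uses placeholder application shares and load figures, and the traffic generator that would consume such a profile is left out; the point is to model both legitimate and attack traffic and to refuse to run a test that offers less load than the production network has already seen.

```python
# Sketch of a declarative traffic profile for a lab test run. The generator
# that would consume this profile is hypothetical; the shares and Gbps
# figures are placeholders modeled on an assumed production peak.
from dataclasses import dataclass

@dataclass
class TrafficClass:
    name: str
    share_pct: float        # share of total offered load
    legitimate: bool = True

PRODUCTION_PEAK_GBPS = 500  # observed peak on the live network (placeholder)
TEST_TARGET_GBPS = 550      # test above the peak, never below it

PROFILE = [
    TrafficClass("HTTP/HTTPS web + Web 2.0", 35.0),
    TrafficClass("HD video streaming",        25.0),
    TrafficClass("Social networking",         10.0),
    TrafficClass("SQL / SAP / Oracle",        12.0),
    TrafficClass("IPSec / VPN",                8.0),
    TrafficClass("Volumetric DDoS flood",      7.0, legitimate=False),
    TrafficClass("Slow, synchronized attacks", 3.0, legitimate=False),
]

def validate(profile: list[TrafficClass]) -> None:
    total = sum(t.share_pct for t in profile)
    assert abs(total - 100.0) < 1e-6, f"shares must total 100%, got {total}"
    assert TEST_TARGET_GBPS > PRODUCTION_PEAK_GBPS, \
        "testing below the live peak gives a false sense of security"
    attack_share = sum(t.share_pct for t in profile if not t.legitimate)
    print(f"Offered load: {TEST_TARGET_GBPS} Gbps, attack traffic: {attack_share:.0f}%")

if __name__ == "__main__":
    validate(PROFILE)
```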

Testing complex storage workloads is only achievable with real traffic. Cache utilization, deduplication, compression, and backup and recovery must be tested with all protocols—SMB2.1 / 3.0, NFS, CIFS, CDMI and iSCSI—and optionally tuned to ensure compliance with defined service levels.
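
As a very rough sketch of what tuning to defined service levels means in practice, the following Python exercises an already-mounted share with a write pass and a read pass and compares the measured throughput to a service-level floor. The mount point, sizes and threshold are placeholders, and a serious test would drive the SMB/NFS/iSCSI protocols themselves with a dedicated workload generator.

```python
# Simplistic sketch of a mixed write/read storage exercise against a share
# mounted at MOUNT_POINT (e.g. over NFS or SMB), with a throughput check
# against a defined service level. All names and numbers are placeholders.
import os
import time

MOUNT_POINT = "/mnt/test-share"       # hypothetical NFS/SMB mount
FILE_COUNT = 8
BLOCK_SIZE = 64 * 1024                # 64 KiB blocks
BLOCKS_PER_FILE = 256                 # 16 MiB per file
MIN_MB_PER_S = 100.0                  # hypothetical service-level floor

def write_phase() -> float:
    """Write fresh random blocks (hard to deduplicate or compress) and fsync."""
    start = time.perf_counter()
    for i in range(FILE_COUNT):
        with open(os.path.join(MOUNT_POINT, f"wl_{i}.bin"), "wb") as f:
            for _ in range(BLOCKS_PER_FILE):
                f.write(os.urandom(BLOCK_SIZE))
            f.flush()
            os.fsync(f.fileno())      # push the data past the page cache
    return time.perf_counter() - start

def read_phase() -> float:
    """Read everything back; note the client cache may flatter this number."""
    start = time.perf_counter()
    for i in range(FILE_COUNT):
        with open(os.path.join(MOUNT_POINT, f"wl_{i}.bin"), "rb") as f:
            while f.read(BLOCK_SIZE):
                pass
    return time.perf_counter() - start

if __name__ == "__main__":
    total_mb = FILE_COUNT * BLOCKS_PER_FILE * BLOCK_SIZE / (1024 * 1024)
    for phase, seconds in (("write", write_phase()), ("read", read_phase())):
        rate = total_mb / seconds
        verdict = "OK" if rate >= MIN_MB_PER_S else "BELOW SERVICE LEVEL"
        print(f"{phase}: {rate:.1f} MB/s ({verdict})")
```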

Although the need for stringent testing is obvious for a new data center, it’s equally important when consolidating or integrating hybrid clouds. The reason is that each new application, and even updates and patches of existing applications, can alter the performance and response times of the network.
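
In practice that argues for a regression gate after every change: keep a latency baseline from the last known-good test run and fail the rollout when fresh measurements drift past a tolerance. A minimal sketch, with a hypothetical baseline file, endpoints and a 20 percent tolerance:

```python
# Sketch of a response-time regression gate to run after every application
# update or patch: compare fresh measurements against a stored baseline and
# flag any endpoint that regressed beyond a tolerance. The file name,
# endpoints and tolerance are placeholders.
import json

BASELINE_FILE = "latency_baseline.json"   # e.g. {"/login": 180.0, "/search": 250.0}
TOLERANCE = 1.20                          # allow up to +20% before failing

def check_regressions(current_ms: dict[str, float]) -> list[str]:
    """Return the endpoints whose latency regressed past the tolerance."""
    with open(BASELINE_FILE) as f:
        baseline_ms = json.load(f)
    failures = []
    for endpoint, measured in current_ms.items():
        allowed = baseline_ms.get(endpoint, float("inf")) * TOLERANCE
        if measured > allowed:
            failures.append(f"{endpoint}: {measured:.0f} ms > allowed {allowed:.0f} ms")
    return failures

if __name__ == "__main__":
    # Placeholder measurements; in practice these come from the load-test run.
    current = {"/login": 210.0, "/search": 410.0}
    problems = check_regressions(current)
    if problems:
        raise SystemExit("Performance regression detected:\n" + "\n".join(problems))
    print("No regressions beyond tolerance.")
```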

DIY or TaaS?

Ensuring optimal data center performance requires investment not only in test systems but also in the employees entrusted to manage them. Developing a qualified test team is just as important as developing and testing the network infrastructure itself. Enterprises don’t typically hire dedicated test engineers, and network and security architects are not always proficient in designing and executing the comprehensive tests needed to ensure their applications and IT systems can handle strenuous loads and sophisticated attacks.

If budget is an issue, external TaaS (testing as a service) offerings can be a useful addition to an in-house solution, especially for larger projects. An external service provider can help determine which systems are the best fit in an existing environment, or before the rollout of a new demanding application such as online gaming. Performance and reliability tests of wireless environments or WAN assessments are other examples of complex projects for which an external TaaS provider is well suited.

So the choices are simple: wait for customer complaints to learn about the performance and resilience of your network, wait for a hacker attack to paralyze your network or put your network and applications to the “real” test with solutions and offerings that replicate your specific load requirements. It’s a no-brainer.

This article was originally published on www.datacenterjourna.com, where it can be viewed in full.