Navigating FedRAMP’s 2024 Updates – What CSPs Need to Know

September 27, 2024 at 8:00 am by Amanda Canale

Since July 2024, the Federal Risk and Authorization Management Program, or FedRAMP, has undergone significant changes that will greatly impact the way cloud service providers (CSPs) are able to obtain authorization to work alongside the federal government and its agencies. 

Prior to the recent revision, the authorization process was conducted via one of two methods: Authorize to Operate (ATO) by way of agency authorization, and Provisional Authority to Operate (P-ATO) via the Joint Authorization Board (JAB). Both methods included a three-step process: Preparation, Authorization, and Continuous Monitoring. 

Now, there is a singular authorization process, ATO, making P-ATO no longer an option for CSPs. 

Recent Changes to Authorization Process

As part of the revision, FedRAMP has introduced several measures that are aimed at speeding up the authorization process without sacrificing the necessary level of scrutiny.

Streamlined Authorization Process 

One of the notable changes involves the modernization of the process for achieving ATO. Previously, obtaining FedRAMP authorization was a complex and time-consuming process, involving multiple steps and significant investment from CSPs. However, with these new changes, FedRAMP is moving towards streamlining the authorization process while maintaining the integrity of security standards, meaning there will be only one authorization method for CSPs — ATO.

With FedRAMP’s new streamlined process comes the dismantling of the JAB and the P-ATO process, and the implementation of a new governing body, the FedRAMP Board. The board will “approve and help guide FedRAMP policies, bring[ing] together the federal community to create a robust authorization ecosystem,” said Eric Mill, the executive director for cloud strategy at the U.S. General Services Administration (GSA).

With a single authorization method, communication becomes more fluid, ensuring that CSPs can address agency concerns in real time, which is expected to expedite approvals. The program has also emphasized more transparent guidelines, clarifying the steps needed to achieve compliance. This reduces the guesswork for cloud service providers and enables them to align their security practices with federal requirements from the outset, rather than having to backtrack and make corrections during the authorization process.

The goal of this new streamlined process is to get more CSPs through the authorization pipeline faster while still maintaining robust security standards, a stark difference from the P-ATO process, which was only conducted during specific times of the year. This effort responds to feedback from the cloud service industry, where companies voiced concerns about the length of time it takes to gain authorization, especially given the rapid pace at which technology changes.

Emphasis on Automations

Among the most impactful changes is the increased emphasis on continuous monitoring and automation. The use of automated tools that can assess security controls in real-time allows cloud service providers to detect vulnerabilities swiftly and efficiently throughout the entire FedRAMP process. This shift towards automation aims to minimize human error, improve response times to threats, and ensure that cloud environments remain secure as they continue to grow and change. Continuous monitoring will now play a more central role in FedRAMP, allowing agencies and cloud providers alike to be better equipped to respond to cybersecurity threats.
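
To make the idea of automated continuous monitoring more concrete, here is a minimal sketch (in Python, not any FedRAMP-prescribed tooling) that flags vulnerability scan findings left open past an allowed remediation window. The field names and the remediation windows are illustrative assumptions, not program requirements.

```python
from datetime import datetime, timedelta

# Hypothetical remediation windows (days) by severity; actual windows
# are defined by your organization's continuous monitoring plan.
REMEDIATION_WINDOWS = {"critical": 30, "high": 30, "moderate": 90, "low": 180}

def overdue_findings(findings, today=None):
    """Return scan findings whose age exceeds the allowed remediation window.

    `findings` is a list of dicts with 'id', 'severity', and 'detected'
    (ISO date) keys -- a simplified stand-in for real scanner output.
    """
    today = today or datetime.utcnow()
    overdue = []
    for f in findings:
        window = REMEDIATION_WINDOWS.get(f["severity"].lower(), 90)
        age = today - datetime.fromisoformat(f["detected"])
        if age > timedelta(days=window):
            overdue.append({**f, "days_open": age.days})
    return overdue

if __name__ == "__main__":
    sample = [
        {"id": "CVE-2024-0001", "severity": "high", "detected": "2024-05-01"},
        {"id": "CVE-2024-0002", "severity": "low", "detected": "2024-09-01"},
    ]
    for item in overdue_findings(sample, today=datetime(2024, 9, 27)):
        print(f"OVERDUE: {item['id']} ({item['severity']}), open {item['days_open']} days")
```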

This emphasis on automation is supported by a new technical documentation hub, automate.fedramp.gov, that was specifically designed to support CSPs during the authorization process. The site provides CSPs with detailed technical specifications, best practices, and guidance on managing their authorization packages.

The intention of this new hub is to provide CSPs with quicker and more frequent documentation updates, improve the user experience for those implementing FedRAMP packages and tools, and to provide a collaborative workflow.

There are plans in place to expand the capabilities of the hub, with the intention to also integrate FedRAMP authorization submissions.
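
As a rough illustration of what working with a machine-readable authorization package might look like, the sketch below summarizes a package stored as JSON. The file shape and field names here are simplified assumptions for illustration only; the actual formats and schemas are documented on automate.fedramp.gov.

```python
import json

def summarize_package(path):
    """Print a quick summary of a machine-readable authorization package.

    Assumes a simplified JSON shape ({"system": ..., "controls": [{"id": ...,
    "status": ...}]}) purely for illustration -- consult automate.fedramp.gov
    for the real package formats and schemas.
    """
    with open(path) as fh:
        package = json.load(fh)

    controls = package.get("controls", [])
    implemented = [c for c in controls if c.get("status") == "implemented"]
    print(f"System: {package.get('system', 'unknown')}")
    print(f"Controls documented: {len(controls)}")
    print(f"Controls implemented: {len(implemented)}")

# Example usage (assuming a local file in the simplified shape above):
# summarize_package("authorization_package.json")
```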

Implementation of Red Teaming 

Previous authorization methods included a three-step process: preparation, authorization, and continuous monitoring. In previous iterations, part of the preparation process for both methods was an initial assessment of the cloud service offerings (CSOs) conducted by an independent third-party assessment organization (3PAO).

The appointed 3PAOs would conduct a thorough evaluation of the CSP’s security package, which included both a documentation review and testing of the cloud service’s implementation of their security controls. Additionally, CSPs were required to provide monthly and annual security assessments, vulnerability scans, and other documentation to prove their ability to protect federal data as part of their continuous monitoring.

With this new revision, FedRAMP has also introduced a new mandate surrounding red teaming, adding an additional layer of scrutiny for cloud security. Red teaming is an advanced form of ethical hacking where security experts simulate real-world attacks on cloud environments to uncover vulnerabilities that traditional testing methods might miss. This new mandate requires CSPs to undergo periodic red teaming assessments, ensuring that their systems can withstand sophisticated threats that are constantly evolving in the cybersecurity landscape.

By simulating these real-world attacks, red teaming identifies weaknesses before they can be exploited, giving CSPs the chance to proactively address potential threats. It’s a vital step in recognizing the importance of not just meeting baseline security standards but continuously improving security postures to keep pace with emerging threats. 

While this new requirement adds an additional layer to the authorization process, it also provides peace of mind for both the CSPs and government agencies, reinforcing the trust necessary for working with sensitive government data. 

Conclusion

At its core, FedRAMP allows federal agencies to leverage modern cloud technologies while maintaining the necessary security protocols. However, as technology evolves and cybersecurity threats become more sophisticated, FedRAMP has had to adapt to ensure CSPs can remain flexible while still adhering to the government’s stringent security requirements. 

These significant changes reflect not only the evolving world of cybersecurity threats, but also the increasing complexity of cloud environments. This revision highlights the program’s adaptability and commitment to maintaining a high level of security across all federal cloud environments. The foundation laid by these updates will help streamline the authorization process, enhance monitoring capabilities, and ultimately provide greater assurance that government data remains protected in an ever-changing threat landscape.

As these recent changes continue to take effect, they are set to shape the future of cloud security for federal agencies, creating a more secure and efficient path forward for cloud adoption across the U.S. government. SEM will be closely following the ongoing evolution of the FedRAMP process and will continue to provide you with the latest updates and guidance to help you navigate the authorization process effectively.

Protecting Financial and Insurance Data: Key Compliance Mandates to Know

September 20, 2024 at 8:30 am by Amanda Canale

Every day, financial institutions face threats of data breaches, making cybersecurity a critical aspect of their operations. As technology evolves, so do the malicious tactics used by cybercriminals to exploit vulnerabilities in the financial sector. This is where compliance regulations come into play. These regulations are designed to protect sensitive financial information, mitigate cyber risks, and maintain the integrity of the financial system.

At the heart of financial compliance is the responsibility to safeguard consumer data and financial information. Financial institutions, from banks to insurance firms, collect and process vast amounts of personal and financial data that, if breached, can be a major liability to both organizations and individuals alike. This data can include everything from credit card numbers and social security details to transaction histories and insurance policies. Given the sensitivity of this information, these regulatory frameworks were developed to ensure its constant protection.

Here’s an overview of some of the critical regulations shaping the world of finance compliance.

Sarbanes-Oxley Act (SOX)

The Sarbanes-Oxley Act (SOX), passed in 2002, was established to protect investors by improving the accuracy and reliability of corporate financial disclosures and reporting. Although the act focuses on financial transparency and corporate governance, SOX compliance is mandatory for all public companies.

A crucial part of SOX compliance is record retention. Financial and insurance companies must keep a wide range of documents, from financial statements and accounting records to emails and client information, for specific timeframes. While SOX doesn’t dictate exactly how records should be destroyed, it stresses the importance of maintaining accurate, unaltered data for those retention periods.

When it’s time to securely dispose of expired records, organizations should, at a minimum, implement a risk management and destruction plan that complies with NIST 800-88 data disposal standards to ensure sensitive information is destroyed responsibly and in line with SOX requirements.
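
As a simple illustration of how a retention-and-destruction plan can be operationalized, the sketch below flags records whose retention period has elapsed so they can be queued for NIST 800-88-aligned destruction. The record types and retention periods are placeholder assumptions; actual retention schedules should come from your compliance team.

```python
from datetime import date, timedelta

# Hypothetical retention periods in years -- actual SOX retention requirements
# vary by record type and should be confirmed with your compliance team.
RETENTION_YEARS = {"financial_statement": 7, "audit_workpaper": 7, "email": 5}

def records_due_for_destruction(records, today=None):
    """Return (name, expiry) pairs for records whose retention has elapsed.

    Each record is a dict with 'name', 'type', and 'created' (a date).
    Expired records would then enter a NIST 800-88-aligned destruction workflow.
    """
    today = today or date.today()
    due = []
    for rec in records:
        years = RETENTION_YEARS.get(rec["type"], 7)
        expires = rec["created"] + timedelta(days=365 * years)  # approximate years
        if today >= expires:
            due.append((rec["name"], expires))
    return due

# Example: a 2016 financial statement is past a seven-year retention window.
sample = [{"name": "FY2016 statements", "type": "financial_statement",
           "created": date(2016, 12, 31)}]
print(records_due_for_destruction(sample, today=date(2024, 9, 20)))
```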

Fair and Accurate Credit Transactions Act (FACTA)

The Fair and Accurate Credit Transactions Act (FACTA), enacted in 2003, is a crucial piece of legislation aimed at enhancing the accuracy, privacy, and security of consumer information. FACTA, as it stands today, amended the Fair Credit Reporting Act (FCRA) and was introduced to address growing concerns about identity theft and consumer credit reporting practices.

At its core, FACTA provides consumers with greater access to their credit reports and includes measures to assist with fraud prevention. One of its most notable impacts is allowing consumers to request a free annual credit report from each of the major credit reporting agencies, ensuring individuals can monitor their credit history and identify potential discrepancies. 

While FACTA doesn’t mandate just one specific method for disposing of consumer report information, it allows some flexibility, enabling organizations to choose their disposal method based on the sensitivity of the data and the associated costs. It is, however, recommended to follow NIST 800-88 data disposal standards for secure and compliant destruction of consumer reports.

General Data Protection Regulation (GDPR)

The European Union’s General Data Protection Regulation (GDPR) has had a profound impact on global financial institutions and their operations. GDPR focuses on data privacy within the European Union and was designed to protect the personal data of the region’s residents from misuse and breaches. Any organization that processes data from EU residents must comply with GDPR; this includes organizations with EU customers, visitors, or branches, those offering goods or services in the region, and even cloud computing companies. Essentially, regardless of where the organization is located, if the data of EU residents is involved, compliance with GDPR standards and regulations is non-negotiable.

The mandate also grants individuals a say in what happens with their data, giving them the right to access, correct, and request the deletion of it. Organizations must also implement and enforce stringent security measures to protect that information from unauthorized access or breaches and maintain transparency about how data is used.
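
To illustrate what honoring a deletion request might look like in practice, here is a minimal sketch that removes a data subject’s records from each data store and logs the action for accountability. The `delete_subject()` interface, store names, and audit log shape are hypothetical; identity verification and legal-hold checks are assumed to happen upstream.

```python
from datetime import datetime

def handle_erasure_request(subject_id, data_stores, audit_log):
    """Delete a data subject's records from each store and log the action.

    `data_stores` maps a store name to an object exposing delete_subject();
    this interface is illustrative, not a real library API. Verifying the
    requester's identity and checking legal holds happen before this step.
    """
    for name, store in data_stores.items():
        removed = store.delete_subject(subject_id)  # returns a record count
        audit_log.append({
            "subject": subject_id,
            "store": name,
            "records_removed": removed,
            "timestamp": datetime.utcnow().isoformat(),
        })
    return audit_log
```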

The GDPR checklist for data controllers is a phenomenal tool designed to help keep organizations on the road towards data security compliance. More information on GDPR’s data destruction best practices can be found here.

Gramm-Leach-Bliley Act (GLBA)

The Gramm-Leach-Bliley Act (GLBA), passed in 1999, focuses on the protection of non-public personal information (NPI) in the financial services sector. The GLBA primarily governs how financial institutions handle the privacy of sensitive customer data and sets strict regulations on how that information can be collected, stored, and shared. By ensuring that businesses adopt responsible data management practices, the GLBA aims to protect consumers from financial and insurance fraud. Financial institutions, such as banks, credit unions, and insurance companies, are required to provide clear and transparent privacy policies, informing customers about the ways their information may be used or shared with third parties.

A key component of the GLBA is the Financial Privacy Rule, which outlines specific guidelines that financial institutions must follow when collecting personal data. This rule requires institutions to give customers the option to “opt-out” of having their information shared with non-affiliated third parties, thereby empowering consumers to have more control over their personal data. 

In 2021, responding to the rise in data breaches, the Federal Trade Commission strengthened data security protocols under GLBA with an updated Safeguards Rule. This rule extends to all non-bank financial institutions, including mortgage companies, car dealers, and insurance companies, ensuring customer financial data is securely protected.

One of the key requirements of the Safeguards Rule is that these institutions must implement a secure disposal policy for customer information within two years of its last use—unless retention is legally or operationally necessary. Although the rule doesn’t list a specific disposal method, following NIST 800-88 data disposal standards is widely regarded as a best practice.
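
As a small worked example of the two-year guideline, the sketch below computes a disposal deadline from a record’s last-use date. Only the two-year window comes from the rule itself; the function and its handling of legally required retention are simplified assumptions.

```python
from datetime import date

def disposal_deadline(last_used: date, retention_required: bool = False):
    """Return the date by which customer information should be securely
    disposed of under the Safeguards Rule's two-year guideline, or None if
    retention is legally or operationally required.

    The two-year window comes from the rule; everything else is illustrative.
    """
    if retention_required:
        return None
    try:
        return last_used.replace(year=last_used.year + 2)
    except ValueError:
        # Handle a Feb 29 last-use date landing on a non-leap year.
        return last_used.replace(year=last_used.year + 2, day=28)

# Example: data last used on 2022-10-01 should be disposed of by 2024-10-01.
print(disposal_deadline(date(2022, 10, 1)))
```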

Payment Card Industry Data Security Standard (PCI DSS)

The Payment Card Industry Data Security Standard (PCI DSS) is a set of security standards designed to protect payment card information and ensure the secure handling of credit and debit card transactions. Established in 2004 by major credit card companies, including Visa, MasterCard, and American Express, PCI DSS applies to any organization that processes, stores, or transmits payment card information. The goal of these standards is to minimize the risk of breaches, fraud, and identity theft, and to quicken data breach response times by enforcing strict security practices across all entities involved in the payment process.

PCI Requirement 3.1 specifically mandates that organizations securely dispose of cardholder data that is no longer needed, with the principle, “if you don’t need it, don’t store it.” Retaining unnecessary data creates a significant liability, and only legally required data should be kept. This applies to any organization involved in processing, storing, or transmitting payment card information—from retail businesses and payment processors to banks and card manufacturers.
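
One practical way to act on “if you don’t need it, don’t store it” is to scan storage for card numbers that should not be there. The sketch below is a simplified data-discovery illustration, not a PCI-mandated tool: it finds 13-19 digit sequences and filters them with a Luhn checksum.

```python
import re

def luhn_valid(number: str) -> bool:
    """Return True if the digit string passes the Luhn checksum."""
    digits = [int(d) for d in number][::-1]
    total = 0
    for i, d in enumerate(digits):
        if i % 2 == 1:       # double every second digit from the right
            d *= 2
            if d > 9:
                d -= 9
        total += d
    return total % 10 == 0

def find_candidate_pans(text: str):
    """Find 13-19 digit sequences that pass the Luhn check.

    A simplified discovery sketch; real cardholder-data discovery tools also
    handle separators, card formats, and false-positive filtering.
    """
    return [m for m in re.findall(r"\b\d{13,19}\b", text) if luhn_valid(m)]

# Example usage: flag text that appears to contain a stored card number.
sample = "order 1001 paid with 4111111111111111 on 2024-01-15"
print(find_candidate_pans(sample))  # ['4111111111111111']
```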

While PCI DSS does not prescribe a specific method for data destruction, the consequences of non-compliance are severe. To mitigate risks, organizations should have clear policies in place for securely destroying all unnecessary data, including both hardcopy documents and electronic media like hard drives, servers, and storage devices.

For PCI DSS compliance, it’s recommended to follow NIST 800-88 data disposal standards to ensure secure and thorough destruction of cardholder data.

Conclusion

Understanding and complying with these mandates is crucial for financial institutions to navigate the complex regulatory environment. By implementing robust internal controls, risk management protocols, and staying informed about regulatory changes, organizations can uphold the principles of transparency, security, and trust that are fundamental to the industry.

Top 5 SaaS Data Breaches

February 28, 2024 at 8:00 am by Amanda Canale

As of 2023, 45% of businesses have dealt with cloud-based data breaches, a five percent increase from the previous year. Data breaches have increased with the advancement of cloud-based platforms and software as a service (SaaS). These services offer the flexibility to access a vast number of applications over the internet rather than installing each one individually. Although this is an incredible technological advancement, it raises high-risk data privacy concerns. Information can easily be shared between cloud services, meaning companies must protect their sensitive information at all costs. With the increase in the use of SaaS applications, there are security measures that should be taken to prevent data leaks from happening.

Here’s a rundown of well-known SaaS companies that have experienced significant data breaches and security measures to help prevent similar incidents from affecting you.

Facebook

Facebook has faced multiple data breaches over the last decade, with the most recent in 2019 affecting over 530 million users. Facebook failed to notify these individual users that their data had been stolen. Phone numbers, full names, locations, email addresses, and other user profile information were posted to a public database. Although financial information, health information, and passwords were not leaked, the breach still raised security concerns among Facebook’s users.

Malicious actors used the contact importer to scrape data from people’s profiles. This feature was created to help users connect with people in their contact list, but it had security gaps that allowed the actors to access information on public profiles. Security changes were put in place in 2019, but the actors had already accessed the information before the fix.

When adding personal information to profiles or online services, individuals need to be conscious of the level of detail they disclose as it can be personally identifying.

Microsoft

In 2021, 30,000 US companies and as many as 60,000 companies worldwide were affected by a cyberattack on Microsoft Exchange email servers. The hackers gained access to emails from organizations ranging from small businesses to local governments.

Again in 2023, an attack attributed to Chinese hackers hit Microsoft’s cloud platform, affecting 25 organizations. The hackers forged authentication tokens to access email accounts and personal information.

Comprehensive backup plans are crucial for a smooth recovery after a data breach occurs. Microsoft constantly updates its security measures, prioritizing email, file-sharing platforms, and SaaS apps. These cyberattacks are eye-opening reminders of how quickly a situation can escalate. Designating a specific team for cybersecurity can help monitor for any signs of suspicious activity.

Yahoo

Yahoo experienced one of the largest hacking incidents in history, affecting 3 billion user accounts. Yahoo initially underestimated the severity of the breach, which ultimately led to a $117.5 million settlement. Yahoo offers services like Yahoo Mail, Yahoo Finance, Yahoo Fantasy Sports, and Flickr, all of which were affected by this breach.

The breach occurred when a Canadian hacker worked with Russian spies to exploit Yahoo’s use of cookies and access important personal data. The hackers could obtain usernames, email addresses, phone numbers, birthdates, and user passwords, all of which are personally identifiable information (PII) and more than enough for a hacker to take over people’s lives. An extensive breach like Yahoo’s raises concerns among its users about data privacy and the cybersecurity of their information.

Verizon

From September 2023 to December 2023, Verizon experienced a breach within its own workplace. The breach occurred when an employee compromised personal data belonging to 63,000 colleagues. Verizon described the issue as “insider wrongdoing.” Names, addresses, and social security numbers were exposed but were not used or shared. Verizon resolved the breach by offering affected employees two years of identity protection and coverage of up to $1 million for stolen funds and expenses.

While the exposed information was not misused and did not extend to customer data, companies need to educate their workforce on data privacy precautions. When individuals hear that insiders are leaking personal information about their own colleagues, it raises concerns for customers as well.

Equifax

Equifax, a credit reporting agency, experienced a data breach in 2017 that affected roughly 147 million consumers. Investigators emphasized the security failures that allowed hackers to get in and navigate through different servers. The hackers gained access to social security numbers, birth dates, home addresses, credit card information, and driver’s license information.

A failed security check by an Equifax employee gave the hackers easy access at multiple points. Taking the extra time to ensure your company has tied up loose ends is crucial for reducing attacks.

Conclusion

Data breaches occur no matter a company’s size or industry, but the risks can be reduced with secure and consistent precautions. Data breaches are common, especially with the extended use of cloud platforms and SaaS, and failing to securely store and transport information among services, maintain a documented chain of custody, and have a data decommissioning process in place all play a role in your sensitive information being accessed by the wrong kinds of people.

At SEM, we offer a variety of in-house solutions designed to destroy any personal information that is out there. Among our IT solutions, our NSA-listed degausser, the SEM Model EMP1000-HS, stands as the premier degausser on the market today. This degausser offers destruction with one click, destroying the magnetic field that stores your end-of-life data. SaaS companies can feel secure knowing their data is destroyed by an NSA-approved data destruction solution. While an NSA-listed destruction solution isn’t always necessary for SaaS companies, it is secure enough for the US government, so we can assure you it’s secure enough to protect your end-of-life data, too.

Whether your data is government-level or commercial, it is important to ensure data security, which is where SEM wants to help. There is an option for everyone at SEM, with a variety of NSA-listed degaussers, IT crushers, and IT shredders to protect your end-of-life data. Further your security measures today by finding out which data solutions work best for you.

Data Centers: Every Square Foot Counts

November 15, 2023 at 1:30 pm by Amanda Canale

In the vast and complex world of data centers, the maximization of space is not just a matter of practicality; it is a crucial aspect that has the power to directly affect a facility’s efficiency, sustainability, flow of operations, and, frankly, financial standing.

Today, information isn’t just power; it serves as the lifeblood for countless industries and systems, making data centers the bodyguards of this priceless resource. With the ever-expanding volume of data being generated, stored, and processed, the effective use of space within these centers has become more critical than ever.

In layman’s terms, every square foot of a data center holds tremendous value and significance.

Now, we’re not here to focus on how you can maximize the physical space of your data center; we’re not experts in which types of high-density server racks will allow you more floor space or which HVAC unit will optimize airflow.

What we are going to focus on is our expertise in high-security data destruction, an aspect of data center infrastructure that holds an equal amount of value and significance. We’re also going to focus on the right questions you should be asking when selecting destruction solutions. After all, size and space requirements mixed with compliance regulations are aspects of a physical space that need to be addressed when choosing the right solution.

So, we are posing the question, “When every square foot counts, does an in-house destruction machine make sense?”

Let’s find out.

The Important Questions

Let’s start off with the basic questions you need to answer before purchasing any sort of in-house data destruction devices.

What are your specific destruction needs (volume, media type, compliance regulations, etc.) and at what frequency will you be performing destruction? 

The first step in determining if an in-house destruction solution is the right move for your facility is assessing your volume, the types of data that need to be destroyed, and whether you will be decommissioning on a regular basis. Are you only going to be destroying hard drives? Maybe just solid state media? What about both? Will destruction take place every day, every month, or once a quarter?

It’s important to also consider factors such as the sensitivity of the data and any industry-specific regulations that dictate the level of security required. Additionally, a high volume of data decommissioning might justify the investment in in-house equipment, while lower-volume needs might require a different kind of solution.

How much physical space can you allocate for in-house equipment?

By evaluating the available square footage in a data center, facility management can ensure that the space allocated for the data destruction equipment is not only sufficient for the machinery but will also allow for efficient workflow and compliance with safety regulations. The dimensions for all of our solutions can be found on our website within their respective product pages.

What is your budget for destruction solutions?

Determining budget constraints for acquiring and maintaining in-house data destruction equipment will allow you to consider not only the upfront costs but also ongoing expenses such as maintenance, training, and potential upgrades. It’s important to note that, in addition to evaluating your budget for in-house equipment, the comparison between the cost of an in-house solution and the cost of a data breach should also be taken into consideration.

All of the answers to these questions will help determine the type of solution (shredder, crusher, disintegrator, etc.), the compliance regulation it should meet (HIPAA, NSA, NIST, etc.), the physical size, and if there should be any custom specifications that should be implemented. 

Data Breaches: A Recipe for Financial Catastrophes

One of the primary reasons why every square foot counts within data centers is the financial element. Building and maintaining data center infrastructures often come with significant expenses, ranging from real estate and construction to cooling, power supply, and hardware installations, just for starters. It’s important to ensure that you are maximizing both your physical space and your budget to get the most bang for your buck.

But even beyond the physical constraints and considerations, the financial implications can loom overhead, especially in the context of data security.

Data breaches represent not just a threat to digital security but also a financial consequence that can reverberate for years. The fallout from a breach extends far beyond immediate remediation costs, encompassing regulatory fines, legal fees, public relations efforts to salvage a damaged reputation, and the intangible loss of customer trust.

For example, from January to June 2019, there were more than 3,800 publicly disclosed data breaches that resulted in 4.1 billion records being compromised. And according to the IBM and Ponemon Institute report, the average cost of a data breach in 2023 was $4.45 million, a 15% increase over the past three years.

So, while, yes, you want to make sure you are making the best use out of your budget to bring in the necessary equipment and storage capability to truly use up every square foot of space, part of that budget consideration should also include secure in-house solutions. 

You’re probably saying to yourself, “As long as I can outsource my destruction obligations, I can maximize my physical space with said necessary equipment.”

You’re not wrong.

But you’re not necessarily right, either.

The Hidden Costs of Outsourced Data Destruction

Outsourcing data destruction has traditionally been a common practice, with the aim of offloading the burden of secure information disposal. However, as we’ve stated in previous blogs, introducing third-party data sanitization vendors into your end-of-life decommissioning procedures can greatly lengthen the chain of custody, resulting in a far higher risk of data breaches.

Third-party service contracts, transportation costs, and potential delays in data destruction contribute to an ongoing financial outflow. Moreover, the lack of immediate control raises concerns about the security of sensitive information during transit. For example, in July 2020, the financial institution Morgan Stanley came under fire for an alleged data breach of their clients’ financial information after an IT asset disposition (ITAD) vendor misplaced various pieces of computer equipment that had been storing customers’ sensitive personally identifiable information (PII).

While ITADs certainly have their role within the data decommissioning world, as facilities accumulate more data, and as the financial stakes continue to rise, the need to control the complete chain of custody (including in-house decommissioning) becomes more and more crucial. 

In-House Data Destruction: A Strategic Financial Investment 

Now that your questions have been answered and your research has been conducted, it’s time to (officially) enter the realm of in-house data destruction solutions – an investment that not only addresses security concerns but aligns with the imperative to make every square foot count. 

It’s crucial that we reiterate that while the upfront costs associated with implementing an in-house destruction machine may appear significant, they must be viewed through the lens of long-term cost efficiency and risk mitigation. 

In the battle against data breaches, time is truly of the essence. In-house data destruction solutions provide immediate control over the process, reducing the risk of security breaches during transportation and ensuring a swift response to data disposal needs. This agility becomes an invaluable asset in an era where the threat landscape is continually evolving. In-house data destruction emerges not only as a means of maximizing space but as a financial imperative, offering a proactive stance against the potentially catastrophic financial repercussions of data breaches. 

Whether your journey leads you to a Model 0101 Automatic Hard Drive Crusher or a DC-S1-3 HDD/SSD Combo Shredder, comparing the costs of these solutions (and their average lifespan) to a potential data breach costing millions of dollars makes your answer that much simpler: by purchasing in-house end-of-life data destruction equipment, your facility is making the most cost-effective, safest, and most secure decision.
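
To make that comparison concrete, here is a back-of-the-envelope sketch. The $4.45 million figure is the IBM/Ponemon average cited earlier; the equipment price, lifespan, upkeep, and breach probability are placeholder assumptions to be replaced with your own quotes and risk estimates.

```python
def annualized_cost(purchase_price, lifespan_years, annual_upkeep):
    """Spread equipment cost over its useful life and add yearly upkeep."""
    return purchase_price / lifespan_years + annual_upkeep

def expected_breach_cost(avg_breach_cost, annual_breach_probability):
    """Expected yearly loss from a breach (probability times impact)."""
    return avg_breach_cost * annual_breach_probability

# Placeholder assumptions -- substitute real quotes and your own risk estimates.
equipment = annualized_cost(purchase_price=50_000, lifespan_years=10, annual_upkeep=2_000)
breach = expected_breach_cost(avg_breach_cost=4_450_000, annual_breach_probability=0.05)

print(f"In-house destruction, annualized: ${equipment:,.0f}")
print(f"Expected annual breach exposure:  ${breach:,.0f}")
```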

You can hear more from Ben Figueroa, SEM Global Commercial Sales Director, below.

The Hidden Heroes: Environmental Solutions for Data Centers

October 30, 2023 at 3:31 pm by Amanda Canale

Behind the scenes of our increasingly interconnected world, lie the hidden heroes of today’s data centers — environmental controls.  

Data centers must be equipped with a multitude of environmental controls, ranging from electricity monitoring and thermal control to airflow and air quality management, fire suppression, and leak detection, all of which play pivotal roles in maintaining an optimal environment for data centers to operate effectively and efficiently.

Embracing compliance regulations and standards aimed at reducing energy consumption and promoting sustainability is an essential step towards a data center’s greener future (not to mention a step towards a greener planet).

Electricity Monitoring

It’s a no-brainer that the main component of a data center’s ability to operate is electricity. In fact, it’s at the center of, well, everything we do now in the digital age.

It is also no secret that data centers are notorious for their high energy consumption, so managing their electricity usage efficiently is essential in successfully maintaining their operations. Not to mention that any disruption to the supply of electricity can lead to catastrophic consequences, such as data loss and service downtime. With electricity monitoring, data centers can proactively track their consumption and identify any service irregularities in real time, allowing facilities to mitigate risk, reduce operational costs, extend the lifespan of their equipment, and guarantee uninterrupted service delivery.

The Role of Uptime Institute’s Tier Classification in Electrical Monitoring

The Uptime Institute’s Tier Classification and electricity monitoring in data centers are intrinsically linked as they both play pivotal roles in achieving optimal reliability and efficiency. The world-renowned Tier Classification system provides data centers with the framework for designing and evaluating their infrastructure based on four stringent tiers. Tier IV is the system’s most sophisticated tier, offering facilities 99.995% uptime per year, or less than or equal to 26.3 minutes of downtime annually.
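
The 26.3-minute figure follows directly from the percentage; here is a quick sanity check in Python:

```python
# Tier IV availability of 99.995% leaves at most 0.005% of the year as downtime.
minutes_per_year = 365 * 24 * 60            # 525,600 minutes
allowed_downtime = minutes_per_year * (1 - 0.99995)
print(round(allowed_downtime, 1))           # ~26.3 minutes per year
```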

Utilizing the Tier Classifications in their electricity monitoring efforts, data centers can fine-tune their power infrastructure for peak efficiency, reducing energy waste and operating costs along the way.

Read more about the vitality of the Uptime Institute’s Tier Classification in our recent blog, here.

Thermal and Humidity Control 

The temperature and humidity within a data center’s walls hold significant value in maintaining the operational efficiency, sustainability, and integrity of a data center’s IT infrastructure.  

Unfortunately, finding that sweet spot between excessive dryness and high moisture levels can be a bit tricky. 

According to the American Society of Heating, Refrigerating and Air-Conditioning Engineers (ASHRAE), data centers should aim to operate between 18–27°C (64.4–80.6°F); however, it’s important to note that this range is just a recommendation and there are currently no mandates or compliance regulations detailing a specific temperature.

Meanwhile, AVTECH Software, a private computer hardware and software developer, suggests a data center environment should maintain ambient relative humidity between 45% and 55%, with an absolute minimum of 20%.
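
As a simple illustration of how these recommendations translate into monitoring logic, the sketch below checks a sensor reading against the ASHRAE and AVTECH ranges cited above. The thresholds come from those recommendations; the function itself and how alerts are delivered are assumptions for illustration.

```python
# Recommended operating ranges cited above (recommendations, not mandates).
TEMP_RANGE_C = (18.0, 27.0)        # ASHRAE recommended temperature envelope
HUMIDITY_RANGE_PCT = (45.0, 55.0)  # AVTECH-suggested ambient relative humidity
HUMIDITY_FLOOR_PCT = 20.0          # suggested absolute minimum humidity

def check_reading(temp_c: float, humidity_pct: float) -> list[str]:
    """Return human-readable alerts for an out-of-range sensor reading."""
    alerts = []
    if not TEMP_RANGE_C[0] <= temp_c <= TEMP_RANGE_C[1]:
        alerts.append(f"Temperature {temp_c:.1f}°C outside {TEMP_RANGE_C}")
    if humidity_pct < HUMIDITY_FLOOR_PCT:
        alerts.append(f"Humidity {humidity_pct:.0f}% below absolute minimum")
    elif not HUMIDITY_RANGE_PCT[0] <= humidity_pct <= HUMIDITY_RANGE_PCT[1]:
        alerts.append(f"Humidity {humidity_pct:.0f}% outside preferred range")
    return alerts

# Example: alerts for a hot aisle reading with low humidity.
print(check_reading(29.5, 41))
```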

Thankfully, due to the exponential rise in data centers over time, there are countless devices available to monitor both temperature and humidity levels.

Striking the right balance in thermal and humidity levels helps safeguard the equipment and maintain a reliable, stable, and secure data center environment. Efficient cooling systems help optimize energy consumption, reducing operational costs and environmental impact, whereas humidity controls prevent condensation, static electricity buildup, and electrostatic discharge, which can damage the more delicate components. 

Air Flow Management and Quality Control

Here’s a question for you: have you ever been working late on your laptop with a bunch of windows and programs open, and it starts to sound like it’s about to take off for space?

That means your laptop is overheating and is lacking proper airflow.

Air flow management and air quality control serve as two sides of the same coin: both contribute to equipment reliability, energy efficiency, and optimal health and safety for operators.

Air Flow Management 

Regardless of their scale, when data centers lack proper airflow management, they can easily become susceptible to hotspots. Hotspots are areas within data centers and similar facilities that become excessively hot from inadequate cooling, ultimately leading to equipment overheating, potential failures, and, even worse, fires. Not only that, but inefficient air flow results in wasted energy and money and requires other cooling systems to work overtime.

By strategically arranging specially designed server racks, implementing hot and cold aisle containment systems, and installing raised flooring, data centers can ensure that cool air is efficiently delivered to all their server components while hot air is effectively pushed out. While meticulous and stringent, this level of management prolongs the lifespan of expensive hardware and greatly reduces energy consumption, resulting in significant cost savings and environmental benefits.

Air Quality Control

Airborne contaminants, such as dust, pollen, and outside air pollution, can severely clog server components and obstruct airflow, leading to equipment overheating and failures and eventually other catastrophic consequences. Not to mention, chemical pollutants from cleaning supplies and other common contaminants such as ferrous metal particles from printers and various mechanical parts, concrete dust from unsealed concrete, and electrostatic dust all play a role in corroding sensitive and critical circuitry.

Air quality control systems, including advanced air filtration and purification technologies, help maintain a pristine environment by removing these airborne particles and contaminants. These additional systems allow facilities to extend their server and network equipment lifespans, operate at peak efficiency, and reduce the frequency of costly replacements and repairs, all while contributing to data center reliability and data security.

Fire Suppression 

The significance of fire suppression in data centers lies in the ability to quickly and effectively prevent and combat fires, ultimately minimizing damage and downtime. Due to the invaluable data, assets, and infrastructure within data centers, these suppression systems are designed to detect and put out fires in their earliest stages to prevent them from spreading and escalating. 

Data centers use a variety of cutting-edge technologies such as early warning smoke detection, heat sensors, water mist sprinkler systems, smoke and fire controlling curtains, and even clean agents like inert gases, which leave no residue, thus further safeguarding the integrity of the sensitive equipment.

Causes of Fires in Data Centers

Electrical failures are the most common cause of data center fires and often stem from overloaded circuits, equipment malfunctions, and defective wiring. Fires can also be started by electrical surges and arc flashes, electrical discharges ignited by low-impedance connections within the facility’s electrical system.

Lithium-ion batteries have a high energy density and are typically placed near a facility’s servers to provide backup power in the case of a main power failure. However, lithium-ion batteries burn hotter than lead-acid batteries, meaning that if they overheat, their temperature can trigger a self-perpetuating reaction that further raises the batteries’ temperature.

Insufficient maintenance, such as failing to clean and repair key data center components like servers, power supplies, and cooling systems, can quickly lead to dust and particle accumulation. Dust, particularly conductive dust, when allowed to build up on these components, can cause short circuits and overheating, both of which can lead to a fire.

Human error is inevitable and can play a large part in data center fires and data breaches, despite all of the advanced technologies and safety measures in place. These errors include improper equipment handling, poor cable management, inadequate safety training, overloaded power sources, and more.

Leak Detection

Remember when we said that it is no secret that data centers are notorious for their high energy consumption? The same can be said for their water usage. 

On average, data centers in the U.S. use approximately 450 million gallons of water a day in order to generate electricity and to keep their facilities cool. Any kind of failure within a data center’s cooling system can lead to a coolant leak, which can further lead to catastrophic consequences, such as costly downtime, data loss, and irreparable damage to their expensive equipment. 

Leak detection systems play a critical role in safeguarding data centers: they promptly identify leaks and alert facility staff before water can damage critical servers, networking equipment, and other valuable assets. Raised floors also act as a protective barrier against potential water damage, keeping sensitive equipment elevated above the floor and reducing the risk of damage and downtime.

The Role of SEM

Data centers operate in controlled environments and have state-of-the-art air quality and flow management systems to achieve equipment reliability, energy efficiency, and optimal health and safety for operators. This much we know.

What we also know is just how important in-house data decommissioning is to maintaining data security. In-house data decommissioning is the process of securely and ethically disposing of any data that is deemed “end-of-life,” allowing enterprises to keep better control over their data assets and mitigate breaches or unauthorized access. 

So, how does in-house data decommissioning play into a data center’s environmental controls?

Well, the process of physically destroying data, especially through techniques like shredding or crushing, can often release fine particulate matter and dust into the air. This particulate matter can potentially sneak its way into sensitive equipment, clog cooling systems, and degrade the facility’s overall air quality, as we discussed earlier.

At SEM, we have a wide range of data center solutions for the destruction of hard disk drives (HDDs) and solid state drives (SSDs) that are integrated with HEPA filtration, acting as a crucial barrier against airborne contaminants. HEPA filtration enhances air quality, improving operator and environmental health and safety.

Conclusion

Temperature and humidity control, air quality and airflow management, fire suppression, and leak detection all work together to create a reliable and efficient environment for data center equipment. Combined with stringent physical security measures, power and data backup regulations, compliance mandates, and proper documentation and training procedures, data center operators can ensure uninterrupted service and protect valuable data assets. 

As technology continues to evolve, the importance of these controls in data centers will only grow, making them a cornerstone of modern computing infrastructure.

You can hear more from Todd Busic, Vice President of Sales, and other members of our team below.

 

Data Center Efficiency Starts with Proper Documentation and Training

October 12, 2023 at 8:00 am by Amanda Canale

At the rate at which today’s technology is constantly improving and developing, the importance of thorough, accurate documentation and training cannot be overstated. After all, data centers house and manage extremely critical infrastructure, hardware, software, and invaluable data, all of which require routine maintenance, overseeing, upgrading, configuration, and secure end-of-life destruction.

One way to view documentation in data centers is that it serves as the thread tying together all the diverse data and equipment that play a crucial role in sustaining these facilities: physical security, environmental controls, redundancies, documentation, training, and more.

Simply put, the overarching theme of proper documentation within data centers is that it provides clarity.

Clarity in knowing where every piece of equipment is located and what state it is in.

Clarity when analyzing existing infrastructure capacities.

Clarity on regulatory compliance during audits.

Clarity on, well, every aspect of a data center’s functionality, to be completely honest.

But, before we dive into the benefits of proper documentation, first things first: what does proper documentation look like?

  • Work instructions and configuration guides;
  • Support ticket logs to track issues, either from end-users or in-house;
  • Chain-of-custody and record of past chains-of-custody to know who is authorized to handle which assets and who manages or oversees equipment and specific areas;
  • Maintenance schedules;
  • Change management systems that track where each server is and how to access it;
  • And most importantly, data decommissioning process and procedures.

This is by no means an exhaustive list of all the necessary documentation data centers should retain, but these few items provide perfect examples of what kind of documentation is needed to keep facilities functioning efficiently. 

Now that you have a better idea of what kind of critical documentation should be maintained, let’s dive into the benefits (because that is, in fact, why you’re here reading this!).

Organization and Inventory Management

Documentation provides a clear and up-to-date picture of all the hardware, software, and infrastructure components within a data center. This includes servers, networking equipment, storage devices, and more. By maintaining accurate records of each component’s specifications, location within the facility, and status, data center managers and maintenance personnel can easily identify their available resources, track their usage, and plan for upgrades or replacements as needed.

Knowledge Preservation and Training Development

In any data center, knowledge is a priceless asset. Documenting configurations, network topologies, hardware specifications, decommissioning regulations, and other items mentioned above ensures that institutional knowledge is not lost when individuals leave the organization. (So, no need to panic once the facility veteran retires, as you’ll already have all the information they have!)

This information becomes crucial for staff, maintenance personnel, and external consultants to understand every facet of the systems quickly and accurately. It provides a more structured learning path, facilitates a deeper understanding of the data center’s infrastructure and operations, and allows facilities to keep up with critical technological advances.

By creating a well-documented environment, facilities can rest assured knowing that authorized personnel are adequately trained, and vital knowledge is not lost in the shuffle, contributing to overall operational efficiency and effectiveness, and further mitigating future risks or compliance violations. 

Knowledge is power, after all! 

Enhanced Troubleshooting and Risk Mitigation 

Understanding how to mitigate risks is fundamental to maintaining data center performance. In the event of an issue or failure (no matter how minor), time is of the essence. Whether it is a physical breach, an environmental disaster, equipment reaching end-of-life, or something entirely different, the quick-moving efforts due to proper documentation expedite the troubleshooting and risk mitigation process. This allows IT staff to identify the root cause of a problem and take appropriate corrective actions as soon as possible, ultimately minimizing downtime and ensuring that critical systems are restored promptly. 

Expansion and Scalability 

As we continue to accumulate more and more data, the need to expand and upgrade data centers also continues to grow. Proper documentation provides the insight needed to plan and execute expansions (whether adding new hardware, optimizing software, reconfiguring networks, or installing in-house data decommissioning equipment), along with visibility into existing capacities, potential areas for growth, and other necessary upgrades. This kind of foresight is invaluable for efficient scalability and futureproofing. Additionally, trained personnel can adapt to these evolving requirements with confidence and ease, boosting morale and efficiency.

Regulatory Compliance Mandates

In today’s highly regulated climate, data centers are subject to a myriad of industry-specific and government-imposed regulations, such as GDPR, HIPAA, PCI DSS, NSA, and FedRAMP (just to name a few). These regulations demand stringent data protection, security, and destruction measures, making meticulous documentation a core component of complying to these standards.

By documenting data center policies, procedures, security controls, and equipment destruction, data centers can provide a clear trail of accountability. This paper trail helps data center operators track and prove compliance with regulations by showcasing the steps taken to safeguard sensitive data and maintain the integrity of operations, both in use and at end-of-life. Not to mention, a properly documented accountability trail can simplify audits and routine inspections, allowing comprehensive documentation to serve as tangible evidence that the necessary safeguards and protocols are in place.

And as we covered earlier in this blog, documentation aids in risk mitigation, offering a proactive approach to allow facilities to rectify issues before they become compliance violations, thereby reducing legal and financial risks associated with non-compliance.

Furthermore, documentation ensures transparency and accountability within an organization, fostering a culture of compliance awareness among data center staff and encouraging best practices. When everyone understands their role in maintaining compliance and can reference documented procedures, the likelihood of unexpected errors or violations decreases significantly.

Data Decommissioning Documentation and the Role of SEM

Documentation provides a comprehensive record of not only the equipment’s history, but also its configuration, usage, and any sensitive data it may have housed. As mentioned above, depending on the type of information that was stored, the equipment falls subject to industry-specific and government-imposed regulations, and the decommissioning process is no different.

When any data center equipment reaches the end of its operational life, proper documentation plays a crucial role in ensuring the secure and compliant disposal of these assets. This documentation is essential for verifying that all necessary data destruction procedures have been followed in accordance with regulatory requirements and industry best practices, allowing for transparency and accountability throughout the entire end-of-life equipment management process and reducing the risk of data breaches, legal liabilities, and regulatory non-compliance. 

At SEM, our mission is to provide facilities, organizations, and data centers the necessary high security solutions to conduct their data decommissioning processes in-house, allowing them to keep better control over their data assets and mitigate breaches or unauthorized access. We have a wide range of data center solutions designed to swiftly and securely destroy any and all sensitive information your data center is storing, including the SEM iWitness Media Tracking System and the Model DC-S1-3. 

The iWitness tool was created to document the data’s chain of custody and a slew of crucial details during the decommissioning process, including date and time, destruction method, serial and model number, operator, and more, all easily exported into one CSV file.
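
To give a sense of what an exported chain-of-custody record can contain, here is a simplified sketch that appends destruction events to a CSV. The columns mirror the details listed above (date and time, destruction method, serial and model number, operator), but this is illustrative code, not the actual iWitness export format; the drive identifiers in the example are hypothetical.

```python
import csv
from datetime import datetime

FIELDS = ["timestamp", "serial_number", "model_number", "destruction_method", "operator"]

def log_destruction_event(path, serial, model, method, operator):
    """Append one destruction event to a chain-of-custody CSV.

    Column names mirror the details described above; this is an illustrative
    sketch, not the iWitness export format itself.
    """
    with open(path, "a", newline="") as fh:
        writer = csv.DictWriter(fh, fieldnames=FIELDS)
        if fh.tell() == 0:            # write a header row for a new file
            writer.writeheader()
        writer.writerow({
            "timestamp": datetime.now().isoformat(timespec="seconds"),
            "serial_number": serial,
            "model_number": model,
            "destruction_method": method,
            "operator": operator,
        })

# Example usage with hypothetical drive details.
log_destruction_event("custody_log.csv", "HDD-00123", "ExampleDrive-4TB",
                      "shred (1.5in particle)", "J. Smith")
```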

The DC-S1-3 is a powerhouse. This robust system was specifically designed for data centers to destroy enterprise rotational/magnetic drives and solid state drives. This state-of-the-art solution is available in three configurations: HDD, SSD, and an HDD/SSD Combo, and uses specially designed saw tooth hook cutters to shred end-of-life rotational hard drives to a consistent 1.5″ particle size. The DC-S1-3 series is ideal for the shredding of HDDs, SSDs, data tapes, cell phones, smartphones, optical media, PCBs, and other related electronic storage media.

These solutions are just a few small examples of our engineering capabilities. With the help of our team of expert engineers and technicians, SEM has the capability and capacity to custom build more complex destruction solutions and vision tracking systems depending on your volume, industry, and compliance regulation. Our custom-made vision systems are able to fully track every step of the decommissioning process of each and every end-of-life drive, allowing facilities to have a detailed track record of the drive’s life. For more information on our custom solutions, visit our website here.

Conclusion

In conclusion, the significance of proper documentation and training cannot be overstated. These two pillars form the foundation upon which the efficiency, reliability, and security of a data center are built.

Proper documentation ensures that critical information about the data center’s infrastructure, configurations, and procedures is readily accessible, maintained, and always up-to-date. Documentation aids in organization and inventory management, knowledge preservation, troubleshooting, and compliance, thereby minimizing downtime, reducing risks, and supporting the overall operational performance of the data center.

In the same vein, comprehensive training for data center personnel is essential for harnessing a facility’s full potential. It empowers staff with the knowledge and skills needed to operate, maintain, and adapt to the evolving demands of a data center, giving them the power and confidence to proactively address issues, optimize performance, and contribute to the data center’s strategic objectives.

As technology continues to advance and data centers become increasingly critical to businesses, investment in proper documentation and training remains an indispensable strategy for ensuring a data center’s continued success and resilience in an ever-changing digital world.

The Critical Imperative of Data Center Physical Security

September 12, 2023 at 8:00 am by Amanda Canale

In our data-driven world, data centers serve as the backbone of the digital revolution. They house an immense amount of sensitive information critical to organizations, ranging from financial records to personal data. Ensuring the physical security of data centers is of paramount importance. After all, a data center’s physical property is the first level of security. By meeting the ever-evolving security mandates and controlling access to the premises, while maintaining and documenting a chain of custody during data decommissioning, data centers ensure that only authorized personnel have the privilege to interact with and access systems and their sensitive information.

Levels of Security Within Data Centers

Before any discussion on physical security best practices for data centers can begin, it’s important to think of data center security as a multi-layered endeavor, with each level meticulously designed to strengthen the protection of data against potential breaches and unauthorized access. 

Data centers with multi-level security measures, like Google and their six levels of data center security, represent the pinnacle of data infrastructure sophistication. These facilities are designed to provide an exceptional level of reliability and high security, offering the utmost advances in modern day security, ensuring data remains available, secure, and accessible. 

Below we have briefly broken down each security level to offer an inside peek at Google’s advanced security levels and best practices, as they serve as a great framework for data centers. 

  • Level 1: Physical property surrounding the facility, including gates, fences, and other more significant forms of defenses.
  • Level 2: Secure perimeter, complete with 24/7 security staff, smart fencing, surveillance cameras, and other perimeter defense systems.
  • Level 3: Data center entry is only accessible with a combination of company-issued ID badges, iris and facial scans, and other identification-confirming methods.
  • Level 4: The security operations center (SOC) houses the facility’s entire surveillance and monitoring systems and is typically managed by a select group of security personnel.
  • Level 5: The data center floor only allows access to a small percentage of facility staff, typically made up solely of engineers and technicians.
  • Level 6: Secure, in-house data destruction happens in the final level and serves as the end-of-life data’s final stop in its chain of custody. In this level, there is typically a secure two-way access system to ensure all end-of-life data is properly destroyed, does not leave the facility, and is only handled by staff with the highest level of clearance.

As technology continues to advance, we can expect data centers to evolve further, setting new, intricate, and more secure standards for data management in the digital age.

Now that you have this general overview of best practices, let’s dive deeper.

Key Elements of Data Center Physical Security

Effective data center physical security involves a combination of policies, procedures, and technologies. Let’s focus on five main elements today:

  • Physical barriers
  • Surveillance and monitoring
  • Access controls and visitor management
  • Environmental controls
  • Secure in-house data decommissioning

Physical Barriers

Regardless of the type of data center and industry, the first level of security is the physical property boundaries surrounding the facility. These property boundaries can range widely but typically include a cocktail of signage, fencing, reinforced doors, walls, and other significant forms of perimeter defenses that are meant to deter, discourage, or delay any unauthorized entry.  

Physical security within data centers is not a mere addendum to cybersecurity; it is an integral component in ensuring the continued operation, reputation, and success of the organizations that rely on your data center to safeguard their most valuable assets.

Surveillance and Monitoring

Data centers store vast amounts of sensitive information, making them prime targets for cybercriminals and physical intruders. Surveillance and monitoring systems are the vigilant watchdogs of data centers and act as a critical line of defense against unauthorized access. High-definition surveillance and CCTV cameras, alarm systems, and motion detectors work in harmony to help deter potential threats and provide real-time alerts, enabling prompt action to mitigate security breaches.

Access Controls and Visitor Management

Not all entrants are employees or authorized visitors. Access controls go hand-in-hand with surveillance and monitoring; both methods ensure that only authorized personnel can enter the facility. Control methods include biometric authentication, key cards, PINs, and other secure methods that help verify the identity of individuals seeking entry. These controls, paired with visitor management systems, allow facilities to control who may enter the facility and allow staff to maintain logs and escort policies to track the movements of guests and service personnel. Together, these efforts minimize the risk of unauthorized access and, in turn, significantly reduce the risk of security breaches.

Under the umbrella of access controls and visitor management is another crucial step in ensuring that only authorized persons have access to the data: assigning and maintaining a chain of custody. 

But what exactly is a chain of custody?

A chain of custody is a documented trail that meticulously records every instance of handling, movement, and access involving data. In the context of data centers, it refers to the tracking and documenting of data assets as they move within the facility and throughout their lifecycle. A robust chain of custody ensures that data is only ever handled by authorized personnel. Every interaction with the data, whether during maintenance, migration, backup, or destruction, is documented. This transparency greatly reduces the risk of unauthorized access or tampering, enhances overall data security, and helps maintain data integrity and compliance with regulations.
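
To make that concrete, here is a minimal sketch of what one custody record might look like when appended to a log file. The field names, file path, and helper function below are hypothetical and are not drawn from any particular data center or SEM tool; they simply illustrate the who, what, when, and where a chain of custody is meant to capture.

    # Minimal chain-of-custody sketch (hypothetical fields and file path).
    import csv
    from datetime import datetime, timezone
    from pathlib import Path

    LOG_PATH = Path("custody_log.csv")  # assumed location of the custody log
    FIELDS = ["timestamp_utc", "drive_serial", "action", "handled_by", "location", "notes"]

    def record_custody_event(drive_serial: str, action: str, handled_by: str,
                             location: str, notes: str = "") -> None:
        """Append one custody event (e.g. 'received', 'migrated', 'destroyed') to the log."""
        new_file = not LOG_PATH.exists()
        with LOG_PATH.open("a", newline="") as f:
            writer = csv.DictWriter(f, fieldnames=FIELDS)
            if new_file:
                writer.writeheader()
            writer.writerow({
                "timestamp_utc": datetime.now(timezone.utc).isoformat(),
                "drive_serial": drive_serial,
                "action": action,
                "handled_by": handled_by,
                "location": location,
                "notes": notes,
            })

    # Example: logging the final destruction step for a single drive.
    record_custody_event("WX41A12345", "destroyed", "J. Smith", "Secure destruction room",
                         "Shredded on-site; particle remains retained for audit")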

Environmental Controls

Within the walls of a data center, safeguarding digital assets also depends on environmental controls; facilities must fend off not only human threats but environmental hazards as well. Because fires, floods, and extreme temperatures are unpredictable, data centers must implement robust environmental control systems, which are essential in preventing equipment damage and data loss.

Environmental control systems include, but are not limited to:

  • Advanced fire suppression systems to extinguish fires quickly while minimizing damage to both equipment and data.
  • Uninterruptible power supplies (UPS) and generators ensure continuous operation even in the face of electrical disruptions.
  • Advanced air filtration and purification systems mitigate dust and contaminants that can harm your equipment, keeping your servers and equipment uncompromised. 
  • Leak detection systems are crucial for any data center. They are designed to identify even the smallest of leaks and trigger immediate responses to prevent further damage.

These systems are the unsung heroes, ensuring the optimal conditions for your data to (securely) thrive and seamlessly integrate with physical security measures.

In-House Data Decommissioning

While there’s often a strong emphasis on data collection and storage (rightfully so), an equally vital aspect of data center security is often overlooked: data decommissioning. In-house data decommissioning is the process of securely and responsibly disposing of any data considered “end-of-life,” and it ultimately empowers organizations to maintain better control over their data assets. Simply put, this translates to the physical destruction of any media deemed end-of-life, whether by crushing for hard disk drives (HDDs), shredding for paper and solid state drives (SSDs), or other methods.

When data is properly managed and disposed of, organizations can more effectively enforce data retention policies, ensuring that only relevant and up-to-date information is retained. This, in turn, leads to improved data governance and reduces the risk of unauthorized access to sensitive data.

In-house data decommissioning ensures that sensitive data is disposed of properly, reducing the risk of data leaks or breaches. It also helps organizations comply with data privacy regulations such as GDPR and HIPAA, which often require stringent secure data disposal practices.

Physical Security Compliance Regulations

We understand that not all compliance regulations are a one-size-fits-all solution for your data center’s security needs. However, the following regulations can still offer invaluable insights and a robust cybersecurity framework to follow, regardless of your specific industry or requirements. 

ISO 27001: Information Security Management System (ISMS)

ISO 27001 is an internationally recognized standard that encompasses a holistic approach to information security. This compliance regulation covers aspects such as physical security, personnel training, risk management, and incident response, ensuring a comprehensive security framework.

When it comes to physical security, ISO 27001 provides a roadmap for implementing stringent access controls, including role-based permissions, multi-factor authentication, and visitor management systems, and the implementation of surveillance systems, intrusion detection, and perimeter security. Combined, these controls help data centers ensure that only authorized personnel can enter the facility and access sensitive areas. 

Data centers that adopt ISO 27001 create a robust framework for identifying, assessing, and mitigating security risks. 

ISO 27002: Information Security, Cybersecurity, and Privacy Protection – Information Security Controls

ISO 27002 offers guidelines and best practices to help organizations establish, implement, maintain, and continually improve an information security management system, or ISMS. While ISO 27001 defines the requirements for an ISMS, ISO 27002 provides the practical controls for data centers and organizations to implement so that various information security risks can be addressed. (It’s important to note that an organization can be certified in ISO 27001, but not in ISO 27002, as the latter simply serves as a guide.)

While ISO 27002’s focus is not solely on physical security, this comprehensive practice emphasizes the importance of conducting thorough risk assessments to identify vulnerabilities and potential threats in data centers, which can include physical threats just as much as cyber ones. Since data centers house sensitive hardware, software, and infrastructure, they are already a major target for breaches and attacks. ISO 27002 provides detailed guidelines for implementing physical security controls, including access restrictions, surveillance systems, perimeter security, biometric authentication, security badges, and restricted entry points, to prevent those attacks.

Conclusion

In an increasingly digital world where data is often considered the new currency, data centers serve as the fortresses that safeguard the invaluable assets of organizations. While we often associate data security with firewalls, encryption, and cyber threats, it’s imperative not to overlook the significance of physical security within these data fortresses. 

By assessing risks associated with physical security, environmental factors, and access controls, data center operators can take proactive measures to mitigate said risks. These measures greatly aid data centers in preventing unauthorized access, which can lead to data theft, service disruptions, and financial losses. Additionally, failing to meet compliance regulations can result in severe legal consequences and damage to an organization’s reputation.

In a perfect world, simply implementing iron-clad physical barriers and adhering to compliance regulations would completely eliminate the risk of data breaches. Unfortunately, that’s simply not the case. Both data center security and compliance encompass not only cybersecurity and physical security, but secure data sanitization and destruction as well. The best way to achieve that level of security is with an in-house destruction plan.

In-house data decommissioning allows organizations to implement and enforce customized security measures that align with their individual security policies and industry regulations. When data decommissioning is outsourced, there’s a risk that the third-party vendor may not handle the data with the same level of care and diligence as in-house teams would.

Throughout this blog, we’ve briefly mentioned that data centers should implement a chain of custody, especially during decommissioning. In-house data decommissioning and implementing a data chain of custody provide data centers the highest levels of control, customization, and security, making it the preferred choice for organizations that prioritize data protection, compliance, and risk mitigation. By keeping data decommissioning within their own control, organizations can ensure that their sensitive information is handled with the utmost care and security throughout its lifecycle.

At SEM, we have a wide range of data center solutions designed for you to securely destroy any and all sensitive information your data center is storing, including the SEM iWitness Media Tracking System and the Model DC-S1-3. 

The iWitness is a tool used in end-of-life data destruction to document the data’s chain of custody and a slew of crucial details during the decommissioning process. The handheld device records the drive’s serial number, model, and manufacturer; the method of destruction and tool used; the name of the operator; the date of destruction; and more, all easily exported into one CSV file.
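
Once that report is exported, it can be audited with very little tooling. The sketch below, for example, tallies destroyed drives by destruction method from a CSV export; the file name and column header used here are assumptions for illustration, not the actual iWitness export format.

    # Hypothetical audit summary over an exported decommissioning report (assumed columns).
    import csv
    from collections import Counter

    def summarize_destruction_log(path: str) -> Counter:
        """Count destroyed drives per destruction method from an exported CSV report."""
        methods = Counter()
        with open(path, newline="") as f:
            for row in csv.DictReader(f):
                methods[row.get("destruction_method", "unknown")] += 1
        return methods

    # Example: print how many drives were destroyed with each method.
    for method, count in summarize_destruction_log("decommissioning_report.csv").items():
        print(f"{method}: {count} drive(s)")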

The DC-S1-3 is specifically designed for data centers to destroy enterprise rotational/magnetic drives and solid state drives. This state-of-the-art solution uses specially designed saw tooth hook cutters to shred end-of-life rotational hard drives to a consistent 1.5″ particle size. This solution is available in three configurations: HDD, SSD, and an HDD/SSD Combo. The DC-S1-3 series is ideal for the shredding of HDDs, SSDs, data tapes, cell phones, smartphones, optical media, PCBs, and other related electronic storage media.

The consequences of improper data destruction are endless, and there is no statute of limitations on data breaches. No matter the industry, purchasing in-house, end-of-life data destruction equipment is well worth the investment and can save your data center time and money in the long run by preventing breaches before they happen.

Data Centers and NIST Compliance: Why 800-53 is Just the Start

August 22, 2023 at 4:42 pm by Amanda Canale

The world of data storage has been growing exponentially for the past several years and shows no signs of slowing down. From paper to floppy disks, HDDs to SSDs, and large servers to cloud-based infrastructures, the way we store data has become increasingly intricate, taking advantage of the latest technological advancements.

As the way we store our data continues to evolve, it’s becoming increasingly vital for data centers, federal agencies, and organizations alike to implement proper cybersecurity and information security practices, along with appropriate procedures for secure data sanitization and destruction. Data center compliance is essential for various reasons, primarily centered around ensuring the security, integrity, and reliability of data and systems. By complying with industry standards and regulations, data centers can safeguard sensitive data and ensure that proper security measures are in place to prevent unauthorized access, data breaches, and cyberattacks, both while data storage devices are in use and when they reach end-of-life.

In summary, data center compliance spans cybersecurity best practices, physical security best practices, and secure data sanitization and destruction. For a data center to operate at optimal performance and security, one cannot exist without the others.

When discussing data center compliance, it’s important not to leave out an important player: the National Institute of Standards and Technology (NIST). NIST is a non-regulatory federal agency whose cybersecurity framework is among the most widely recognized and adopted in the industry, offering one of the most comprehensive and in-depth sets of framework controls available. NIST’s mission is to educate industry, government, academia, and healthcare organizations on information system security for all applications outside of national security, on both a national and global scale.

Their strict and robust standards and guidelines are widely recognized and adopted by both data centers and government entities alike seeking to improve their processes, quality, and security. 

In today’s blog, I want to dive into the two most important NIST publications data centers should consistently reference and implement into their security practices: NIST 800-88 and NIST 800-53. Both standards help create consistency across the industry, allowing data centers to communicate and collaborate with, and more effectively protect, partners, clients, and regulatory bodies. Again: cybersecurity and destruction best practices go hand in hand and should be implemented as a pair in order for a data center to operate compliantly.

Step 1: Data Center Security and Privacy Framework

NIST 800-53

NIST 800-53 provides guidelines and recommendations for selecting and specifying security and privacy controls for federal information systems and organizations. While NIST 800-53 is primarily utilized by federal agencies, its principles and controls are widely recognized and adopted as a critical resource for information security and privacy management, not only by federal agencies but also by private sector organizations, international entities, and more importantly, data centers. 

NIST 800-53 serves as a comprehensive catalog of security and privacy controls that data centers can use to design, implement, and assess the security posture of their IT systems and infrastructure, all of which are crucial in sustaining a data center. The controls are related to data protection, encryption, data retention, and data disposal, and serve as a valuable resource for data centers looking to establish intricate and well-rounded cybersecurity and information security programs. 

NIST 800-53 addresses various aspects of information security, such as access control, incident response, system and communications protection, security assessment, and more. Each control is paired with specific guidelines and implementation details. These security controls, of which there are over a thousand, are further categorized into twenty “control families” based on their common objectives. (For example, access-related controls are grouped together, as are incident response controls, and so forth.) These control families cover various aspects of security, including access control, network security, system monitoring, and incident response, helping data centers achieve higher rates of uptime and minimize downtime.
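
To make the idea of control families more concrete, the short sketch below groups individual control identifiers by their two-letter family prefix. The family abbreviations and names (AC, IR, PE, SC) follow NIST 800-53’s published families, but the handful of control IDs in scope here is purely an illustrative selection, not a recommended baseline.

    # Group NIST 800-53 controls by family prefix (illustrative subset only).
    from collections import defaultdict

    FAMILY_NAMES = {  # four of the twenty control families
        "AC": "Access Control",
        "IR": "Incident Response",
        "PE": "Physical and Environmental Protection",
        "SC": "System and Communications Protection",
    }

    controls_in_scope = ["AC-2", "AC-3", "IR-4", "PE-3", "SC-7"]  # hypothetical selection

    by_family = defaultdict(list)
    for control in controls_in_scope:
        prefix = control.split("-")[0]
        by_family[FAMILY_NAMES.get(prefix, prefix)].append(control)

    for family, controls in sorted(by_family.items()):
        print(f"{family}: {', '.join(controls)}")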

Since data centers often handle sensitive and valuable information, they require robust physical security measures to prevent breaches and unauthorized access. NIST 800-53 addresses physical security controls, including access controls, video surveillance, intrusion detection systems, and environmental monitoring, which are vital in protecting the data center’s infrastructure.

It’s important to mention that while NIST 800-53 provides an increasingly valuable foundation for securing data center operations, organizations may need to tailor the controls to their specific environments, risk profiles, and compliance requirements. NIST 800-53 offers a flexible framework that allows for customization to suit the unique needs of different data center operators, making it a vital and critical resource.

Step 2: Data Destruction Compliance 

NIST 800-88

First published in 2006, NIST 800-88, Guidelines for Media Sanitization, provides guidance on how organizations can conduct the secure and proper sanitization and/or destruction of media containing sensitive, classified, and top secret information. NIST 800-88 covers various types of media, including hard disk drives (HDDs), solid state drives (SSDs), magnetic tapes, optical media, and other storage devices. It has quickly become the de facto standard for the U.S. Government and is continuously referenced in federal data privacy laws. What’s more, NIST 800-88 guidelines have been increasingly adopted by private companies and organizations, especially data centers. The main objective is to help data centers and organizations establish proper procedures for sanitizing media before disposal at end-of-life.

When a data center facility or section is being decommissioned, equipment such as servers, storage devices, and networking gear must be properly sanitized and disposed of. NIST 800-88’s guidelines help data center operators develop procedures to securely handle the removal and disposal of equipment without risking future data breaches.

When it comes to sanitizing media, NIST 800-88 offers three key methods:

  1. Clearing: The act of overwriting media with non-sensitive data to prevent data recovery.
  2. Purging: A more thorough and comprehensive method that will render the stored data unrecoverable using advanced technology, such as cryptographic erasure and block erasing.
  3. Destruction: The physical destruction of a storage device by way of shredding, crushing, disintegrating, or incineration. This often includes electromagnetic degaussing, a method that produces a buildup of electrical energy to create a magnetic field that scrambles the drive’s magnetically stored data, rendering the drive completely inoperable. The strength of the degausser is critical when eliminating sensitive information from magnetic media; degaussers evaluated and listed by the National Security Agency (NSA) are typically considered the gold standard.

However, even these methods can come with their own drawbacks. For instance: 

  1. Clearing: For sensitive, classified, or top secret information, clearing or overwriting should never serve as the sole destruction method. Overwriting is only applicable to HDDs, not SSDs or Flash, and does not fully remove the information from the drive. 
  2. Purging: Unfortunately, purging methods are highly prone to human error and are a very time-consuming process.
  3. Destruction: Once the drive has been destroyed, it cannot be reused or repurposed. However, this method provides the assurance and security that the data is fully unrecoverable, the process can take mere seconds, and there is no room for human error.

The chosen destruction and/or sanitization method depends on the sensitivity of the information on the media and the level of protection required, so it is crucial that data centers and organizations take into account the classification of information and media type, as well as the risk to confidentiality. NIST 800-88 provides valuable guidance on media sanitization practices, which are crucial for data centers to ensure the secure disposal of data-filled devices while minimizing the risk of data breaches. Proper implementation of NIST guidelines allows data center officials to protect sensitive information and maintain data security throughout the lifecycle of data center equipment.
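
As a highly simplified illustration of that decision, the sketch below maps data sensitivity, reuse plans, and media type to one of the three methods. The rules and labels are assumptions made for this example only; they are not the full NIST 800-88 decision flow and are no substitute for your own classification policy.

    # Simplified sanitization decision helper (illustrative only; not the full NIST 800-88 flow).
    def choose_sanitization_method(sensitivity: str, media_will_be_reused: bool,
                                   media_type: str) -> str:
        """Return 'clear', 'purge', or 'destroy' based on assumed, simplified rules."""
        if sensitivity in ("classified", "top secret"):
            return "destroy"              # physical destruction for the highest sensitivity levels
        if not media_will_be_reused:
            return "destroy"              # no reuse planned, so destruction gives the simplest assurance
        if media_type in ("ssd", "flash"):
            return "purge"                # overwriting alone is not sufficient for flash-based media
        return "purge" if sensitivity == "sensitive" else "clear"

    print(choose_sanitization_method("sensitive", media_will_be_reused=True, media_type="hdd"))    # purge
    print(choose_sanitization_method("classified", media_will_be_reused=False, media_type="ssd"))  # destroy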

The Importance of Verification 

NIST guidelines, specifically NIST 800-88, have become the industry standard for secure data sanitization; however, they are not as prescriptive as other regulatory frameworks. With NIST, the responsibility for data sanitization falls to a data center’s or agency’s chief information officers, system security managers, and other related staff.

As discussed above, the destruction and/or sanitization method depends on the sensitivity of the information on the media and the level of protection required, so it is critical to the security of the end-of-life data that organizations discuss the matters of security categorization, media chain of custody including internal and external considerations, and the risk to confidentiality.

Regardless of the method chosen, verification is the next critical step in the destruction and sanitization process. NIST verification typically refers to the process of validating or verifying compliance with standards, guidelines, or protocols established by the data center and/or organization. By NIST 800-88 standards, verification is the process of testing the end-of-life media to see if the stored information is accessible. 

For sanitization equipment to be verified, it must be subjected to testing and certification, such as NSA evaluation and listing, and must abide by a strict maintenance schedule. For proper sanitization, the device must be verified through third-party testing should the media be reused. However, when media is destroyed, no such verification is necessary, as the pulverized material itself is verification enough.

Since third-party testing can be impractical, time consuming, and a gateway to data breaches, we at SEM always push for the in-house sanitization and destruction of media as the only way to ensure full sanitization of data and to mitigate future risks. When destroying data in-house, companies can be certain that the data is successfully destroyed.

Conclusion

When it comes to data center compliance and security, there is no one-stop-shop. Adhering to both NIST 800-88 and 800-53 guidelines enhances the reputation of data centers by demonstrating a commitment to data security and privacy. This can help build trust with clients, customers, and stakeholders, leading to stronger business relationships. More importantly, these guidelines are necessary when collecting, storing, using, or destroying certain data. NIST provides educational resources, training materials, and documentation that help data center staff understand security concepts and best practices, empowering data center personnel to implement effective security measures.

At SEM, we have a wide range of NSA listed and noted solutions and CUI/NIST 800-88 compliant devices designed for you to securely destroy sensitive information. After all, the consequences of improper data destruction are endless and there is no statute of limitations on data breaches. No matter what the industry, purchasing in-house, end-of-life data destruction equipment is well worth the investment. Need us to craft a custom solution for your data center? You can find out more here. 

Uptime Institute’s Tier Classification: Everything You Need to Know

July 25, 2023 at 7:01 pm by Amanda Canale

Just as Security Engineered Machinery has been the global standard for high security data destruction solutions, the Uptime Institute’s Tier Classification has served as the international standard for data center performance. The classification evaluates data centers’ server hosting availability and reliability, and over the past 25 years, the Uptime Institute has issued over 2,800 certifications across more than 114 countries.

With the Uptime Institute’s Tier Classification come four tiers centered on data center infrastructure, defining the criteria needed for maintenance, power, cooling, and fault capabilities: Tiers I, II, III, and IV.

Before we dive into the Uptime Institute’s Tier Classification, I want to run through some data center vocabulary:

Uptime

Uptime is the annual amount of time that a data center is guaranteed to be available and running. Availability is measured in degrees of “nines,” starting from a 99% guarantee. A data center with 99.671% uptime offers far less availability and reliability than one with 99.982% uptime.

Essentially, a data center wants to achieve as many “nines” as possible. A 99.9% availability (or “three nines”) still allows for roughly 8.8 hours of downtime per year. A data center with 99.999% availability (“five nines”) has less than six minutes of downtime per year, or approximately twenty-six seconds per month.
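
These figures are just arithmetic on the 8,760 hours in a year, as the short sketch below shows; the tier availability percentages used in the demo are the ones covered later in this post, and the printed values are approximate.

    # Convert an availability percentage into allowed downtime per year.
    HOURS_PER_YEAR = 24 * 365  # 8,760 hours, ignoring leap years

    def annual_downtime_hours(availability_percent: float) -> float:
        return HOURS_PER_YEAR * (1 - availability_percent / 100)

    for label, availability in [("three nines", 99.9), ("five nines", 99.999),
                                ("Tier I", 99.671), ("Tier II", 99.741),
                                ("Tier III", 99.982), ("Tier IV", 99.995)]:
        hours = annual_downtime_hours(availability)
        print(f"{label} ({availability}%): {hours:.2f} hours (~{hours * 60:.0f} minutes) of downtime per year")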

Downtime

Downtime is the annual amount of time that a data center’s availability is interrupted. Downtime can occur for a number of reasons: routine maintenance, hardware failures, natural disasters, cyberattacks, and, most commonly, human error.

Whenever a data center experiences downtime, there’s a cost: according to the ITIC’s 11th Annual Hourly Cost of Downtime Survey, an hour of downtime can cost some firms and corporations anywhere from $1 to $5 million, not including any potential legal fees, fines, and penalties. The more downtime a data center has, the higher its risk of data breaches due to the little or no protection and security monitoring in place during that time. It’s also important to mention that downtime doesn’t only affect data center employees: it prevents outside customers and clients from accessing services and information, too. So even if a data center experiences downtime that does not result in a data breach, it can have very real monetary and reputational consequences.

Redundancy

Redundancy is the duplication of a data center’s primary resources and power in case of failure. These fail-safe systems can take the form of backup generators, uninterruptible power supplies (UPS), and cooling systems, ensuring that the data center can continue to run if a primary component fails.

Now, let’s dive into each tier!

Tier I

Tier I is a data center at its most basic level of availability. This first tier offers no guarantee of redundancy and, at a minimum, provides a UPS for power spikes, lags, and outages. Most small businesses and warehouses that lack around-the-clock operations and have minimal power needs operate at a Tier I level. Tier I facilities operate on a single distribution path for power and cooling, which can easily be overloaded or fall susceptible to planned and unplanned disruptions. In return, Tier I offers 99.671% uptime, meaning a maximum of 28.8 hours of downtime per year and plenty of room for disruption and subsequent breach.

Tier II

Tier II facilities offer a bit more uptime, with a 99.741% rating, equaling no more than 22 hours of downtime per year. Like Tier I facilities, Tier II facilities operate on a single distribution path for power and cooling but offer additional options for maintenance and disruption mitigation. Some of these features include engine generators, cooling units, pumps, and heat rejection equipment. While modest, this bump in availability can improve a data center’s reliability, but it still does not fully protect against unexpected shutdowns.

Tier III

Unlike Tier I and II facilities, Tier III facilities are generally utilized by larger businesses and offer more than one redundant distribution path, meaning the infrastructure has the capacity and availability to fully support the IT load and provide backup to ensure performance and reliability. This jump in reliability allows for 99.982% uptime, resulting in no more than 1.6 hours of downtime per year.

While this tier is significantly more reliable, it is not completely fault tolerant. Tier III allows for routine maintenance without impacting service, but facilities are still vulnerable to outages, spikes, and power lags.

Tier IV

Tier IV is the most sophisticated tier and is typically used by enterprise corporations. This tier offers twice the operational capacity (or 2N) as well as additional backup components (+1) for ultimate reliability. In this tier, every critical component of the data center’s primary infrastructure is duplicated and able to run at full capacity, meaning that even during a disruption, operations can continue.

Tier IV facilities offer 99.995% uptime, or less than or equal to 26.3 minutes of downtime per year. While this level of classification can be the most expensive to implement, it is the one generally adopted by government organizations and larger enterprise corporations.

Conclusion

The Uptime Institute’s Tier Classification demonstrates that at any data center scale, it is absolutely vital to have redundancies in place in order to keep downtime as low as possible. Data centers should strive to reach the highest tier they can in order to maintain high levels of performance, availability, and reliability.

Equally vital, ultimate data center security also requires a detailed and clear data decommissioning program as part of the operations plan to ensure other safety, security, and operational safeguards are in place. The best way to achieve that level of security is with an in-house destruction plan for HDDs, SSDs, and other data center media types. When decommissioning is implemented improperly, data centers can fall subject to breaches and experience extreme financial loss and irredeemable damage to public trust. At SEM, we offer NIST 800-88 compliant degaussers, crushers, and shredders that are versatile enough to fit any environment and scale, along with auditing and documentation systems.

Since our inception in 1967, SEM has served as the industry leader in high security, comprehensive end-of-life data destruction solutions that ensure the protection of sensitive, classified, and top secret information within the government, intelligence community, and commercial markets. Our solutions are specifically designed and manufactured to comply with the most frequently cited and stringent of regulatory requirements and compliance mandates, including the National Security Agency’s (NSA) Evaluated Product List (EPL) — which is used to determine if a data destruction device is approved to destroy the US Government’s top secret and classified materials. 

Over the years, many data centers have pivoted to the most secure data decommissioning policy: in-house destruction. By using devices like the SEM 0300 shredder line, EMP1000-HS degausser, 2SSD, and iWitness documentation tool, data centers’ data is more secure than ever when drives reach end-of-life.

The fact of the matter is: the further we get into the Digital Age, the more critical it becomes to protect our most sensitive data. Corporations, businesses, and enterprises all require a data center that can deliver reliability in line with their uptime requirements, along with an in-house data destruction plan.