Mar 11
Defense Contracts and the CMMC

The US Department of Defense (DoD) released the Cybersecurity Maturity Model Certification (CMMC) on January 31, 2020 as a unified standard for implementing cybersecurity across the defense industrial base, which includes over 300,000 companies. The CMMC is the DoD's response to a significant number of compromises of sensitive defense information residing on contractors' information systems. To be eligible for DoD contract awards, contractors are required to hold the CMMC certification.

Contractors are responsible for implementing and monitoring their information technology systems and any sensitive DoD information stored on those systems. The CMMC framework guides companies with the appropriate levels of cybersecurity practices and processes to protect Federal Contract Information (FCI) and Controlled Unclassified Information (CUI) within their unclassified networks.

The CMMC defines five certification levels of increasingly mature cybersecurity practices and processes:

  • CMMC Level 1. Basic cyber hygiene practices and safeguarding of sensitive contract data
  • CMMC Level 2. Intermediate practices that begin to protect Controlled Unclassified Information (CUI)
  • CMMC Level 3. Good cyber hygiene practices to safeguard CUI, including the NIST SP 800-171 controls
  • CMMC Level 4. Proactive practices to detect and respond to the tactics, techniques, and procedures of advanced persistent threats (APTs)
  • CMMC Level 5. Sophisticated capabilities in place to detect and respond to APTs

 

Controlled Unclassified Information (CUI) is information that government agencies and some of their contractors are required to mark and classify within their data stores. CUI is a category of sensitive data created by the U.S. federal government, or developed on its behalf, that merits special protection against exposure.

As a result of the CMMC and the contractual agreements between contractors and the DoD, assessors must understand a contractor's response capabilities by knowing which systems store CUI that may fall outside policy. When it comes time to prove that CMMC controls are in place, you must be able to audit your systems, generate comprehensive reports, and review those reports in detail. Doing so requires a robust, accurate, vended data discovery toolset.

Avoid Loss of DoD Contracts

A typical government contract is worth around $250,000, and without this certification there is a substantial risk of losing contracts. To reduce the risk of lost contracts and of data breaches involving CUI, as governed by DFARS 252.204-7012, NIST SP 800-171/172, and the CMMC, it is necessary to identify which systems store and process Controlled Unclassified Information (CUI).

Conducting regular CUI risk or breach-damage assessments is time intensive, and doing so manually is not feasible. It is necessary to use an industry-trusted data discovery tool that can accurately locate common types of PII and CUI. These automated tools reduce the overall time spent locating documents with categories or markings that may be in scope for the CMMC.

The U.S. government's rules for protecting CUI include marking (classifying) documents to indicate their protected status. The National Archives and Records Administration (NARA) issued a handbook on marking best practices in 2016 that cites the proper organizational markings and categories to consider when looking for CUI (a simple detection sketch follows this list):

  • Categories
  • Banner Marking: Specified Authorities
  • Category Marking
  • Organizational Index Grouping
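
As a rough illustration of what automated discovery looks for, here is a minimal Python sketch (not Spirion's actual logic, and not the authoritative NARA marking list) that flags text containing common CUI banner markings and the legacy FOUO/SBU acronyms discussed later in this post:

import re

# Illustrative patterns only -- not the authoritative NARA marking list.
# Banner markings typically look like "CUI" or "CUI//SP-CTI", while legacy
# documents may carry acronyms such as FOUO or SBU.
CUI_MARKING_PATTERNS = [
    re.compile(r"\bCUI(//[A-Z0-9-]+)*\b"),
    re.compile(r"\bCONTROLLED UNCLASSIFIED INFORMATION\b", re.IGNORECASE),
    re.compile(r"\bFOUO\b"),      # legacy: For Official Use Only
    re.compile(r"\bSBU\b"),       # legacy: Sensitive But Unclassified
]

def find_cui_markings(text: str) -> list[str]:
    """Return the marking strings found in a document's text."""
    hits = []
    for pattern in CUI_MARKING_PATTERNS:
        hits.extend(match.group(0) for match in pattern.finditer(text))
    return hits

print(find_cui_markings("CUI//SP-CTI\nThis report supersedes the FOUO version."))
# ['CUI//SP-CTI', 'FOUO']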

 

CMMC compliance will help reduce the potential loss of contracts, and using discovery tools to accurately locate these types of data is core to the CMMC. Before the concept of CUI was introduced in 2008, documents that contained sensitive defense information such as schematics, reports, and other technical data were marked with an array of acronyms indicating their protected status, such as For Official Use Only (FOUO) and Sensitive But Unclassified (SBU). Since the introduction of CUI and the accompanying executive order, NARA has been charged with facilitating these standards across the DoD.

The Right Tool for the CMMC

Maintaining alignment with the CMMC is about using the right set of tools; no single security tool can do it all. Spirion is one such tool: it identifies both PII and CUI across structured and unstructured data by searching text and images for common PII and by searching for phrases, words, and acronyms that are indicative of CUI. Its data discovery and classification capabilities are fundamental to supporting compliance with the CMMC.

Success comes from accurate, automated processes to identify and classify sensitive data that falls under the CMMC, such as CUI. Conducting regular CUI risk assessments throughout the business's information ecosystem, and implementing a data classification policy that embeds labels into documents and files, helps delineate their sensitivity and protects against unauthorized or unintended transfer and publication of CUI.

Feb 23
PCI Compliance Makes Dollars and Sense

Protecting sensitive data is a challenging task. Between the complexities of the data itself and the legal implications surrounding an alphabet soup of data privacy regulations, too many organizations struggle to develop protection strategies. Visibility into the data is one of the most difficult things to achieve, yet it is vital to meeting compliance.

For organizations that accept credit card payments, compliance with the Payment Card Industry Data Security Standard (PCI DSS) is a must. "Maintaining payment security is required for all entities that store, process or transmit cardholder data," the PCI Security Standards Council explained. PCI DSS "set the technical and operational requirements for organizations accepting or processing payment transactions, and for software developers and manufacturers of applications and devices used in those transactions."

The PCI Security Standards demonstrate that data discovery is foundational and core to the assessment for a PCI audit. There are twelve requirements, all designed to put protection of consumer PII first, and they include a multitude of security controls for the devices that store sensitive PCI data.

The Cost of Non-Compliance

PCI compliance continues to be a challenge, with only 27.9% of organizations achieving 100% compliance during their interim compliance validation, according to the Verizon 2020 Payment Security Report. Compliance should not be treated as a "checkbox" activity but as an everyday practice of protecting sensitive data.

The average cost of a data breach is $3.86 million, according to the IBM and Ponemon Institute Cost of a Data Breach 2020 report. When consumer PII, the very data PCI DSS is designed to protect, is compromised, it costs a company roughly $150 per record. Data breaches also result in a loss of reputation and consumer confidence. Consumers don't like having credit cards replaced because a company failed to protect sensitive information, and they will take their business elsewhere. According to the Deloitte Global Survey on Reputation Risk, on average 25% of a company's market value is directly attributable to its reputation; add to that lost revenue and the impact of being unable to process payment card transactions.

Data breaches and reputational damage are not the only costs of failing PCI compliance. Companies that do not meet the requirements can be fined thousands of dollars for each month of non-compliance. There are also legal costs to consider during remediation, along with the potential inability to process payment card transactions.

PCI compliance comes at a cost. The size and scope of your organization, its overall security posture, and whether there is dedicated staff handling PCI compliance all factor into the cost of setting up and maintaining the mechanisms that meet PCI standards.

Why Accurate Audits Matter

PCI audits can be costly because they require the company to have the right processes and tools in place. Audits are time consuming and stressful for your security and data privacy teams, but they are vital to protecting both the company and its customers. Knowing which devices store and process sensitive data is key to reducing PCI costs, and it reduces the potential for breaches because your systems continuously track the location of sensitive data. Nothing is left unknown.

Accuracy matters when it comes to identifying where your PCI data really lives. Not being able to accurately discover PCI data will impact your overall assessment and add cost to the process. Organizations must be able to demonstrate to their auditors (QSAs) that cardholder data is not located on devices outside the scope of PCI. A PCI audit must validate that the perceived scope of compliance is in fact accurately defined and documented.

Organizations shouldn't view a PCI audit as a point-in-time process, but as an ongoing exercise that demonstrates governance of cardholder data throughout the entirety of the data lifecycle.

Regulations like PCI DSS are designed to protect data privacy, which in turn goes a long way in preventing data breaches. Maintaining awareness of where PCI data resides is crucial to maintaining good consumer privacy practices. While you need to invest upfront with the right data management systems and whatever security tools are needed for compliance, being PCI compliant will pay off in the long run.

Dec 19
Domain Controller Event 3041 Warning

The security of this directory server can be significantly enhanced by configuring the server to enforce validation of Channel Binding Tokens received in LDAP bind requests sent over LDAPS connections. Even if no clients are issuing LDAP bind requests over LDAPS, configuring the server to validate Channel Binding Tokens will improve the security of this server.

For more details and information on how to make this configuration change to the server, please see https://go.microsoft.com/fwlink/?linkid=2102405.

How to set the client LDAP signing requirement by using a domain Group Policy Object

  1. Select Start > Run, type mmc.exe, and then select OK.
  2. Select File > Add/Remove Snap-in.
  3. In the Add or Remove Snap-ins dialog box, select Group Policy Object Editor, and then select Add.
  4. Select Browse, and then select Default Domain Policy (or the Group Policy Object for which you want to enable client LDAP signing).
  5. Select OK.
  6. Select Finish.
  7. Select Close.
  8. Select OK.
  9. Select Default Domain Policy > Computer Configuration > Windows Settings > Security Settings > Local Policies, and then select Security Options.
  10. In the Network security: LDAP client signing requirements Properties dialog box, select Require signing in the list, and then select OK.
  11. In the Confirm Setting Change dialog box, select Yes.
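
For a quick spot check on a client, the policy above is, to my understanding, reflected in the LDAPClientIntegrity registry value. Here is a minimal Python sketch under that assumption; verify the value name and its meaning against Microsoft's documentation for your OS build before relying on it:

import winreg

# Assumption: the "Network security: LDAP client signing requirements"
# policy surfaces as the LDAPClientIntegrity value under the ldap service
# key (0 = None, 1 = Negotiate signing, 2 = Require signing).
KEY_PATH = r"SYSTEM\CurrentControlSet\Services\ldap"
LEVELS = {0: "None", 1: "Negotiate signing", 2: "Require signing"}

def ldap_client_signing_level() -> str:
    with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, KEY_PATH) as key:
        value, _ = winreg.QueryValueEx(key, "LDAPClientIntegrity")
    return LEVELS.get(value, f"Unknown ({value})")

print("LDAP client signing requirement:", ldap_client_signing_level())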
Dec 08
Permanent Account Number (PAN)

The Permanent Account Number (PAN) is a unique ten-character alphanumeric identifier issued by the Indian Income Tax Department. The primary purpose of the PAN is to bring universal identification to all financial transactions and to prevent tax evasion by keeping track of monetary transactions, especially those of high-net-worth individuals who can impact the economy.

Structure of PAN

The PAN is a ten-character alphanumeric unique identifier, structured as follows (a format-check sketch follows the breakdown below).

Example: AAAPZ1234C

  • The first three characters of the code are a sequence of letters from AAA to ZZZ.
  • The first five characters are letters in uppercase, followed by four numerals, and the last (tenth) character is a letter.
  • The fourth character (for example, "P" for an individual person) identifies the type of holder of the card. Each holder type is uniquely identified by a letter from the list below:
    • A — Association of persons (AOP)
    • B — Body of individuals (BOI)
    • C — Company
    • F — Firm
    • G — Government
    • H — HUF (Hindu undivided family)
    • L — Local authority
    • J — Artificial juridical person
    • P — Individual or Person
    • T — Trust (AOP)
  • The fifth character of the PAN is the first character of either:
    • Surname or last name of the person, in the case of a "personal" PAN card, where the fourth character is "P" or
    • of the name of the entity, trust, society, or organization in the case of a company/HUF/firm/AOP/trust/BOI/local authority/artificial juridical person/government, where the fourth character is "C", "H", "F", "A", "T", "B", "L", "J", or "G".
  • The last (tenth) character is a letter that serves as a check character to verify the validity of the code.
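
As an illustration, here is a minimal Python sketch that checks a candidate string against the structure described above. It validates format only; the tenth character is a check letter whose algorithm is not published here, so this is not a full validity check:

import re

# Format check only: five uppercase letters (with a recognized holder-type
# code in the fourth position), four digits, and a final check letter.
PAN_PATTERN = re.compile(r"^[A-Z]{3}[ABCFGHLJPT][A-Z][0-9]{4}[A-Z]$")

def looks_like_pan(value: str) -> bool:
    """Return True if the string matches the documented PAN structure."""
    return bool(PAN_PATTERN.match(value.strip().upper()))

for candidate in ("AAAPZ1234C", "AAAXZ1234C", "AAAPZ12345"):
    print(candidate, looks_like_pan(candidate))
# AAAPZ1234C True   (valid structure, holder type 'P')
# AAAXZ1234C False  ('X' is not a listed holder-type code)
# AAAPZ12345 False  (the last character must be a letter)
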
Aug 27
Classify All Files in a Folder (Local Path)

Introduction

On occasion it may not be possible to classify based solely on the content of a file. The scenario arises when all files in a folder need to be classified as HR, Finance, etc. so that the data can be tracked (tagged) back to the source location from which it originated. The following procedure describes how to persistently classify all files in a folder.

Requirements

This process requires the following prerequisites:

  • Current versions of the SDM product(s).
  • All Files "SDD".
  • "Workflow" rule.
  • Search "Policy".

Important Notes

Note that proceeding with the following could override the true content-based classifications for any file in these locations.

Process

STEP 1 (SDD)

  1. Import or create a RegEx to capture all files in a location using the expression below (see the sketch after this step).
  2. From within the Spirion SDM Console Admin > Sensitive Data Types > Add > Select Data Type = Regular Expression > Name = Classify All Files > Expression = (\S|\s|\w|\d)*
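
As a side note, the expression (\S|\s|\w|\d)* is simply a catch-all: \S and \s together already cover every character, so the SDD matches any file content. A quick Python sketch (illustrative only, outside of Spirion) showing that behavior:

import re

# \S (non-whitespace) plus \s (whitespace) already cover every possible
# character, so this expression matches any file content whatsoever.
classify_all = re.compile(r"(\S|\s|\w|\d)*")

for text in ("", "quarterly_review.docx contents", "123-45-6789\nline two"):
    print(repr(text), "->", classify_all.fullmatch(text) is not None)
# Every sample prints True, which is why the resulting classification
# applies to every file in the targeted folder.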

STEP 2 (Workflow)

  1. Import or create a new workflow from within the Spirion SDM Console Admin > Classification > Add > Name = XXXX.
  2. Optionally select a color, icon, or weight.
    1. Click OK.
  3. Highlight the new Classification, from within the Spirion ribbon Select Rule > Add.
    1. Workflow Rule
      1. Provide a workflow rule name and other options
    2. Definition
      1. Select the following from the dropdown options:
        1. "Location", "Contains", and the path to the folder whose files should all be classified as XXXX.

    1. Endpoints
      1. Select the appropriate endpoint from where the local files are being searched.
    2. Actions
      1. The Classification tag should be selected automatically; if not, select the XXXX classification name from step 3.a.i in the "Classify results as:" dropdown.
      2. For "Execute classification rules:", select "Directly on the endpoint".
    3. Click Finish (bottom right of the page) to save the new rule.

STEP 3 (Search Policies)

  1. Import or create a new search policy from within the Spirion SDM Console Admin > Policies.
  2. Create a new Policy by clicking Policy > Create.
    1. On the Policy Tab provide a name and optionally a description.
      1. For Policy Type select Scheduled Task.
      2. On the Endpoints Tab select the same endpoint chosen for the workflow created previously.
      3. On the Data Types Tab deselect all.
      4. On the Location Tab deselect all.
    2. Click Finish
  3. Select the policy name just created > expand the tree view > expand Search Locations and Select Custom Folders.
    1. Click Add from the Ribbon.
    2. In the new Folder Location field, enter the same path used in the workflow, for example C:\Location\PoC_Test_Data.
    3. Select "Include in Search" to the right of the folder path for the Scope.
    4. Click the green check mark to the left of the folder path to save the changes.
  4. Select the policy name just created > expand the tree view > Select Sensitive Data Types.
    1. From the resulting list, select "Classify All Files" as created in Step 1 of this document.
  5. Select the policy name just created > expand the tree view > Select "Scheduled Tasks" or "Search > Initiate Search" to search and classify all files from within the target folder location.

Outcomes

As a result of these procedural steps, all files within the folder will be classified as XXXX so that the data can be tracked (tagged) back to the source location from which it originated.

Aug 14
Credit Card Validation

In this post I explain several ways to identify and validate a credit card number (CCN). The main point is to illustrate how complex identifying sensitive data can be: doing so manually is not practical, hence the need to automate with tools such as Spirion (Spirion.com).

Validation of CCN

Below are techniques that can be used to perform cursory checks on CCNs, with an explanation of each of the most common validation methods.

Luhn Algorithm Check

The Luhn Algorithm is a simple checksum formula used to validate a variety of identification numbers, including credit card numbers and numerous others, such as:

  • IMEI numbers
  • US National Provider Identifier (NPI) numbers
  • Canadian Social Insurance Numbers
  • Israeli ID Numbers
  • South African ID Numbers
  • Greek Social Security Numbers (ΑΜΚΑ)
  • McDonald's survey codes
  • Taco Bell receipts
  • Tractor Supply Co. receipts


Most credit cards and many government identification numbers use this algorithm as a simple method of distinguishing valid numbers from mistyped or otherwise incorrect ones.
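
For illustration, here is a minimal Python implementation of the Luhn check: walk the digits right to left, double every second digit, subtract 9 from any doubled value above 9, and require the total to be divisible by 10. A real discovery tool layers this on top of prefix, length, and contextual checks; this sketch stands alone:

def luhn_valid(number: str) -> bool:
    """Return True if the digits in the string pass the Luhn checksum."""
    digits = [int(ch) for ch in number if ch.isdigit()]
    if not digits:
        return False
    total = 0
    for position, digit in enumerate(reversed(digits)):
        if position % 2 == 1:          # double every second digit from the right
            digit *= 2
            if digit > 9:
                digit -= 9
        total += digit
    return total % 10 == 0

# 4111 1111 1111 1111 is a well-known test number that passes the check;
# changing the last digit breaks the checksum.
print(luhn_valid("4111 1111 1111 1111"))  # True
print(luhn_valid("4111 1111 1111 1112"))  # False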

Major Industry Identifier

The first digit of a credit card number represents the category of entity which issued the card.

Issuer Identification Number

The first six digits of a card number identify the institution that issued the card to the card holder.

Personal Account Number

Digits 7 through the second-to-last digit (the last digit is the checksum) identify the individual account.

How many digits in a Credit Card Number?

  • Visa and Visa Electron: 13 or 16
  • MasterCard: 16
  • Discover: 16
  • American Express: 15
  • Diner's Club: 14 (including enRoute, International, Blanche)
  • Maestro: 12 to 19 (multi-national Debit Card)
  • Laser: 16 to 19 (Ireland Debit Card)
  • Switch: 16, 18 or 19 (United Kingdom Debit Card)
  • Solo: 16, 18 or 19 (United Kingdom Debit Card)
  • JCB: 15 or 16 (Japan Credit Bureau)
  • China UnionPay: 16 (People's Republic of China)
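
Putting the Major Industry Identifier, issuer prefix, and length table together, here is a rough Python sketch that guesses a card brand. The prefixes are commonly published ranges (for example, Visa begins with 4 and American Express with 34 or 37); they are illustrative only and not an exhaustive or current BIN table:

import re

# Illustrative prefix/length rules only -- real issuer (IIN/BIN) tables are
# far more granular than this and change over time.
BRAND_RULES = [
    ("American Express", re.compile(r"^3[47]"),    {15}),
    ("Visa",             re.compile(r"^4"),        {13, 16}),
    ("MasterCard",       re.compile(r"^5[1-5]"),   {16}),
    ("Discover",         re.compile(r"^6011|^65"), {16}),
]

def guess_brand(number: str) -> str:
    digits = re.sub(r"\D", "", number)   # strip spaces and dashes
    for brand, prefix, lengths in BRAND_RULES:
        if prefix.match(digits) and len(digits) in lengths:
            return brand
    return "Unknown"

print(guess_brand("4111 1111 1111 1111"))  # Visa
print(guess_brand("3782 822463 10005"))    # American Express
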
Jul 13
Migrating a MSSQL DB to AWS RDS MSSQL

To import an MSSQL .bak file from another location into an Amazon AWS RDS MSSQL instance, you must follow these instructions; there is currently no other option for RDS MSSQL.

Create an S3 bucket and verify that the RDS MSSQL instance can access it; this can be accomplished by modifying the VPC appropriately or by granting public access to the bucket. After this, verify that you can connect to the instance on port 1433 using SSMS or telnet, using the instance name, which in my case looked like:

  • nameofdb.b6rfjiaj2jhj.us-east-1.rds.amazonaws.com

When the instance is accessible, run the following query within SSMS to import the .bak from S3 into RDS MSSQL.

exec msdb.dbo.rds_restore_database
@restore_db_name='DBNAME',
@s3_arn_to_restore_from='arn:aws:s3:::s3bucketname/DBNAME.bak',
@with_norecovery=0,
@type='FULL';

 

You can track the status of the import process using the following native tracking guide:

https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/SQLServer.Procedural.Importing.html#SQLServer.Procedural.Importing.Native.Tracking
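
The tracking guide above uses the msdb.dbo.rds_task_status procedure. If you prefer to poll it from a script rather than SSMS, here is a rough Python sketch using pyodbc; the driver name, credentials, and login method are assumptions you will need to adjust for your instance:

import time
import pyodbc

# Assumptions: SQL authentication, the "ODBC Driver 17 for SQL Server"
# driver, and the msdb.dbo.rds_task_status procedure described in the AWS
# guide linked above.
CONN_STR = (
    "DRIVER={ODBC Driver 17 for SQL Server};"
    "SERVER=nameofdb.b6rfjiaj2jhj.us-east-1.rds.amazonaws.com,1433;"
    "DATABASE=msdb;UID=admin;PWD=your_password"
)

def poll_restore_status(db_name: str, interval_seconds: int = 30) -> None:
    """Print the restore task rows for db_name every interval_seconds."""
    with pyodbc.connect(CONN_STR) as conn:
        cursor = conn.cursor()
        while True:
            cursor.execute("exec msdb.dbo.rds_task_status @db_name = ?", db_name)
            for row in cursor.fetchall():
                print(row)
            time.sleep(interval_seconds)

poll_restore_status("DBNAME")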

If you receive the following error during the import, you must create an RDS "Option Group":

Msg 50000, Level 16, State 0, Procedure msdb.dbo.rds_restore_database, Line 80 [Batch Start Line 0]
Database backup/restore option is not enabled yet or is in the process of being enabled. Please try again later.
USAGE:
EXECUTE msdb.dbo.rds_restore_database @restore_db_name, @s3_arn_to_restore_from, [@kms_master_key_arn], [@type], [@with_norecovery]
@restore_db_name
: Name of the database being restored
@s3_arn_to_restore_from
: S3 ARN of the backup file used to restore database from.
@kms_master_key_arn
: KMS customer master key ARN to decrypt the backup file with.
@type
: The type of restore. Valid types are FULL or DIFFERENTIAL. Defaults to FULL.
@with_norecovery
: The recovery clause to use for the restore operation. Set this to 0, to restore with RECOVERY (database will be online after the restore).
Set this to 1, to restore with NORECOVERY (database will be left in the RESTORING state allowing for subsequent differential or log restores).
For FULL restore, defaults to 0.
For DIFFERENTIAL restores, you must specify 0 or 1.

Navigate to the Amazon RDS portal.

 

Click Options Group > Create Group

  • Provide a name with no spaces or capital letters (for example, mssqlse)
  • Provide description (For example, MSSQL Standard Edition)
  • Choose "sqlserver-se" for Standard Edition MSSQL
  • Choose Engine Version (14.00 in this case)
  • Click Create

The new Options Group is now displayed in the available Options Groups for your Amazon RDS portal page.

  • Select the newly created Options Group and Add Option.
  • Choose SQLSERVER_BACKUP_RESTORE for the Option Details name.
  • Choose "Create Custom" from the IAM dropdown option.
  • Choose immediately for the Scheduling option.
  • Select Add option.

Back on the Amazon RDS DB Portal Page

  • Select your DB instance and select Modify.
  • Scroll down the Modify DB Instance page to Database Options
    • Change the Option group to your newly created group.
  • Click Continue
  • Under the Scheduling of modifications section, select Apply Immediately.
  • Click Modify DB Instance.

This will associate a group that will permit the import of a database into the AWS RDS MSSQL instance.

I have done this successfully for a vended application and SharePoint 2019 thus far. Happy importing!

Note - The following server-level roles are not available within the AWS RDS MSSQL instance:
https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/CHAP_SQLServer.html

  • bulkadmin
  • dbcreator
  • diskadmin
  • securityadmin
  • serveradmin
  • sysadmin
Mar 01
Coronavirus COVID-19 Report

Here is a map of Coronavirus COVID-19 global cases by the Center for Systems Science and Engineering (CSSE) at Johns Hopkins University. It is a very helpful interactive graphic to better understand the virus trends globally.

https://gisanddata.maps.arcgis.com/apps/opsdashboard/index.html#/bda7594740fd40299423467b48e9ecf6

https://covid-19.splunkforgood.com/coronavirus__covid_19_

Feb 17
Disconnect Bing and Cortana Online Services from Windows 10 Search

To disconnect Bing and Cortana online services from Windows 10 search, follow these steps (a scripted alternative follows the list):

  • Run Regedit.exe
  • Navigate to HKEY_CURRENT_USER > SOFTWARE > Microsoft > Windows > CurrentVersion > Search
  • Look for 'BingSearchEnabled'; if you don't see it, create it (right-click in a blank area, pick New > DWORD (32-bit) Value, and name it 'BingSearchEnabled').
  • Open BingSearchEnabled, set it to 0, press OK.
  • Look for 'CortanaConsent'; again, create it using the method above if it does not exist, and also set it to 0.
  • Reboot.
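
If you would rather script the change than edit the registry by hand, here is a minimal Python sketch using the standard winreg module; it writes the same two DWORD values described above (run it as the user whose search behavior you want to change, then reboot):

import winreg

# Writes the two DWORD values described in the steps above under the
# current user's Search key.
KEY_PATH = r"SOFTWARE\Microsoft\Windows\CurrentVersion\Search"

def disable_bing_search() -> None:
    with winreg.CreateKeyEx(winreg.HKEY_CURRENT_USER, KEY_PATH, 0,
                            winreg.KEY_SET_VALUE) as key:
        winreg.SetValueEx(key, "BingSearchEnabled", 0, winreg.REG_DWORD, 0)
        winreg.SetValueEx(key, "CortanaConsent", 0, winreg.REG_DWORD, 0)

disable_bing_search()
print("BingSearchEnabled and CortanaConsent set to 0; reboot to apply.")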

 

Dec 30
Data Discovery Management to Meet Governance, Regulation, and Compliance

Sensitive data has considerable economic value to businesses, and they must manage it appropriately. Protecting that data is of utmost concern because organizations risk substantial fines from regulators and damage to their brand reputation (Yahoo, Marriott, Equifax, etc.). As a result, organizations are driven to protect their networks primarily because of the sensitive data they store.

At the core of every successful data security management program is an automated data discovery process that can accurately locate sensitive data so it can be effectively managed and protected. Most of that sensitive data is stored on workstations, file servers, databases, and cloud repositories, and the breadth of where data can be stored has grown significantly in just a few years. The large number of individuals involved in handling sensitive data makes appropriate data management very challenging. You cannot protect these assets unless you know where the sensitive data resides.

In my 20+ years of operational data management and security, the most successful data management and governance programs have combined a unified set of information security tools with a governance and compliance process. Many components contribute to that success, but the most effective generally include good user awareness of the sensitive data in use, classification of data by sensitivity, a governance process that includes policy, best practices, and standards, and IT systems that augment data security protection strategies. With these key processes in place, an organization can meet its internal IT policy, governance, regulation, and compliance needs.

IT Data Management

The most effective data protection programs include the ability to report on the locations and types of sensitive data. There is no single best process or tool to protect a network; there are numerous technologies, including DLP, firewalls, encryption, patch management, IDS, antivirus, and, most importantly, a data discovery mechanism. The most effective approach to good data hygiene and the protection of sensitive data is to know what you're protecting and to what extent to leverage each of those IT security processes. With the effective use of a data discovery tool you can complement and extend your data protection stance, with a data discovery toolset such as Spirion Sensitive Data Manager (SDM) at the core. The following shows how a sensitive data discovery process can complement your existing data security posture.

DLP – These technologies can protect sensitive data in motion; however, they are inundated with false positives and require considerable resources to manage. Complement and extend a DLP's capabilities by augmenting it with an at-rest data discovery tool like Spirion, so that data is discovered more accurately and the persistent metadata tags embedded in sensitive files can be leveraged to improve the data-in-motion protection process.

Firewall – This is the most common type of technology used. Knowing exactly which file servers contain the most sensitive data (PCI, CCPA, GDPR, PII, etc.) allows the most effective use of FTE resources to define firewall rules more granularly and to comply with regulators' compensating-control requirements. To successfully block access by users or applications that should not have it, you must know what you're protecting.

Encryption – Identifying what really needs to be protected through encryption reduces the complications of managing the various data repositories.

Patch Management – Which storage devices do we need to be more aggressive with patching when a zero day occurs?

Intrusion Detection (IDS) – What sensitive data was on the device when the intrusion occurred!?

Antivirus – What sensitive data was on the device when malware infected it with a Trojan six weeks ago?

Sensitive data management is not solely an internal IT security or policy-driven process; effective data management involves people, process, and technology. Good governance of this data is achieved when you can report on where and what types of sensitive data are being stored. Being able to do so helps avoid costly mistakes and damage to a company's reputation.

Compliance and Regulation

This is a people, process, and technology challenge, and the value of sensitive data discovery extends beyond the IT management of data. Data discovery tools are especially useful in the compliance and regulation landscape. Should the situation arise, a data discovery tool can reveal what sensitive data was on a compromised, lost, or stolen device months ago, allowing an organization to respond more effectively to legal, compliance, or IT security processes. Should a PCI audit occur, you can work with a QSA to scope the devices under the PCI umbrella. If your organization must comply with GDPR, CCPA, or any number of US state laws and receives a DSAR, you can act on it. Having a data discovery process at the center of these efforts provides value across all of them.

Data has grown exponentially in size and complexity, and an automated data discovery and reporting process is essential for success. New privacy regulations are being implemented around the globe at an increasing pace, and while they do not all prescribe identical requirements, they share a common one: know where and what sensitive data is being stored. Plan ahead and have a data discovery tool in place!

The CCPA takes effect January 1, 2020, but you still need to know what that data was and where it resided a year prior. GDPR has been in effect since May 25, 2018, and we have already seen huge fines for noncompliance. The NY SHIELD Act is very similar in scope to the CCPA, along with numerous other state data protection laws that emerged in just the first six months of 2019 (see the US State Data Protection Laws data sheet).

With so much more to consider in the data management governance, regulation and compliance landscape it can be overwhelming to decide where to start. More to come on that!

I welcome your feedback and look forward to your different perspectives.
