Data transmission is growing rapidly, and the digitization of everything from financial transactions to video is enabling organizations to quickly share information with global partners both inside and outside their trusted networks. However, many organizations do not recognize the operational, financial and security risks associated with the growing proliferation of seemingly secure, user-managed file transfer systems.
Quick—how many different ways is sensitive data flowing out of your organization? Think you know?
You’ve patched your servers, upgraded security policies and locked down your users so that you are finally compliant with Sarbanes-Oxley, the Health Insurance Portability and Accountability Act (HIPAA) or the Payment Card Industry Data Security Standard (PCI DSS). But unless you’ve got a robust strategy to address your managed file transfer needs, you still may not be in compliance, and you may even be opening your data and systems to increased security risks. As the size and volume of data movement operations continue to grow, so does the likelihood of failure.
Globalization, digitization, broadband proliferation—these are the three big trends that have driven managed file transfer from the secured back room of bank transfers and data center backups to the forefront of data security and business flexibility challenges. As data movement continues to grow both within and between enterprises and amid today’s mix of technology backdrops, it is becoming ever more important to effectively manage data transmissions according to both business priorities and security policies. In addition, global trading partners with differing levels of information technology (IT) infrastructure maturity have necessitated the use of the Internet to transport much of this data. For many, file transfer via the free File Transfer Protocol (FTP) over the Internet looks like the perfect solution. But beware: the distributed nature of the Internet tends to hide the overall impact and costs of the interruptions, network failures and security failures that come with it.
Quick—how many FTP servers in your organization are sending and receiving critical or sensitive business data? Who is sending it, and what is the cost of failure?
A global energy company recently performed a full audit of all data and file transmission systems in use and discovered that it had more than 70 different systems, with very little overall ability to standardize business rules, security policies, logging and notifications, or data auditing. Sound familiar? At best, unmanaged data movement can result in unproductive use of network resources. At worst, it can result in noncompliance or security breaches.
Whether you are moving funds to the Federal Reserve, sending computer-aided design (CAD) drawings to your manufacturing partner, submitting daily store receipts to headquarters or sending prerelease product specifications to an ad agency, the principles of managing critical and sensitive data movement both within and outside your organization remain the same. To get control of data movement in your organization, you must have a strategy that addresses the following:
Consistent Security Enforcement
Globalization requires the flexibility to quickly onboard and interoperate with trading partners with varying degrees of IT maturity—including those that use the Internet for file transfer. For this reason, security for your file transfer systems needs to be proxy-based and moved “out to the edge.” In common FTP implementations, for example, usernames and passwords are sent in clear text and cannot be shared between servers, and security auditing is virtually nonexistent. A proxy-based security paradigm, by contrast, provides full policy enforcement and supplemental functions such as session breaks, protocol analysis and rules-based process management.
To create a secure file transfer environment, a file transfer proxy should:
* Support multiple protocols such as FTP, FTPS, HTTP and HTTPS
* Enforce firewall navigation best practices with no inbound holes in the firewall and no files stored in the demilitarized zone (DMZ)
* Prevent direct communications between internal and external sessions via session breaks
* Authenticate incoming connections using the Secure Sockets Layer (SSL) or Transport Layer Security (TLS) protocol, and exchange and validate trading partner certificates prior to establishing a session
* Guard against common attacks to ensure business continuity
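On the client side, the first of these requirements—encrypted, multi-protocol transfer with no clear-text credentials—can be illustrated with Python’s standard library. This is a minimal sketch, not a proxy implementation; the host, credentials and file path are placeholder assumptions:

```python
import ssl
from ftplib import FTP_TLS

def secure_context() -> ssl.SSLContext:
    """Client-side TLS context that refuses legacy protocol versions
    and always verifies the server certificate."""
    ctx = ssl.create_default_context(ssl.Purpose.SERVER_AUTH)
    ctx.minimum_version = ssl.TLSVersion.TLSv1_2
    ctx.check_hostname = True
    ctx.verify_mode = ssl.CERT_REQUIRED
    return ctx

def send_file(host: str, user: str, password: str, path: str) -> None:
    # host/user/password/path are hypothetical; substitute your endpoint.
    with FTP_TLS(context=secure_context()) as ftps:
        ftps.connect(host, 21)
        ftps.login(user, password)   # credentials now travel encrypted
        ftps.prot_p()                # encrypt the data channel as well
        with open(path, "rb") as f:
            ftps.storbinary(f"STOR {path}", f)
```

The key detail is `prot_p()`: without it, FTPS encrypts only the control channel, leaving file contents in the clear.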
Multifactor authentication and additional certificate checks will provide even more security. Key capabilities should include:
* CRL Checking, which validates certificates against one or more Certificate Revocation Lists (CRLs) located on a Lightweight Directory Access Protocol (LDAP) server
* Standard Certificate Validation, which allows standard certificate validation functions, including enforcement of valid dates and validation of issuer signature
* Application Policy Enforcement, which allows validation of certificate extensions using custom formulas
* LDAP queries against certificate content, which allow lookups based on the contents of a presented certificate
* Additional validation for CRL checking, LDAP validations and custom lookups
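Several of these checks map directly onto settings that OpenSSL exposes through Python’s `ssl` module. The sketch below covers standard chain validation (valid dates, issuer signature) and leaf-certificate CRL checking; the file paths are placeholders, and custom extension formulas or LDAP lookups would need additional tooling not shown here:

```python
import ssl

def validating_context(ca_file=None, crl_file=None):
    """Server-side context that demands and validates client certificates.

    Standard validation (date range, issuer signature, chain of trust)
    is performed by OpenSSL automatically once verify_mode is
    CERT_REQUIRED and a trust store is loaded.
    """
    ctx = ssl.create_default_context(ssl.Purpose.CLIENT_AUTH)
    ctx.verify_mode = ssl.CERT_REQUIRED
    if ca_file:
        ctx.load_verify_locations(cafile=ca_file)   # trusted issuer chain
    if crl_file:
        # PEM-encoded CRL, loaded into the same trust store
        ctx.load_verify_locations(cafile=crl_file)
        ctx.verify_flags |= ssl.VERIFY_CRL_CHECK_LEAF  # reject revoked leafs
    return ctx
```

Note that with `VERIFY_CRL_CHECK_LEAF` set, the handshake fails unless a CRL covering the leaf certificate has actually been loaded, so the CRL file must be kept current.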
Notification and Recovery
A robust data movement infrastructure requires numerous hardware and software resources. The reliability of the entire structure needed for data movement operations is the product of the reliability of each component. For example, if you have 50 components and each has a reliability of 0.9999, then the reliability of the entire infrastructure is roughly 0.995, meaning there is a five out of 1,000 chance of a failure for each operation. A robust file transfer strategy will ensure that notification of critical exceptions and error conditions is instantaneous, that message routing is flexible and that historical logging is deployed. Data transmission errors can be costly to the network and time-consuming and disruptive to the business. For mission-critical or time-sensitive data transmissions, automated recovery and retransmission is a must.
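The arithmetic behind that estimate is straightforward serial reliability: every component must succeed, so the individual success probabilities multiply.

```python
def chain_reliability(per_component: float, components: int) -> float:
    """Probability that every component in a serial chain succeeds."""
    return per_component ** components

r = chain_reliability(0.9999, 50)
print(round(r, 3))      # → 0.995
print(round(1 - r, 3))  # per-operation failure chance, → 0.005
```

The multiplicative form also shows why adding components hurts: doubling the chain to 100 components roughly doubles the failure probability.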
By using scripting, scheduling and application integration in your file transfer systems, you can reduce human error and ensure critical business processes operate smoothly. Often, human error is not discovered until much time has passed, incrementally driving up cost and risk. Without automation, file transfer operations can complete successfully yet produce unusable results because user-selected parameters or options were never validated. Additionally, without central automation of scheduled activities, user-initiated data movement risks conflicts with backups, batch jobs or other critical network activity.
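The automated-recovery idea can be sketched in a few lines, assuming the actual transfer step is supplied by the caller as a callable (a hypothetical interface, not a specific product’s API): retry transient failures with exponential backoff, and surface the error to a scheduler or alerting hook only after the final attempt.

```python
import time

def run_transfer(send, attempts: int = 3, base_delay: float = 2.0):
    """Invoke a transfer callable, retrying transient network failures
    with exponential backoff (base_delay, 2x, 4x, ...)."""
    for attempt in range(1, attempts + 1):
        try:
            return send()
        except OSError:                # network-level failure
            if attempt == attempts:
                raise                  # let monitoring/notification take over
            time.sleep(base_delay * 2 ** (attempt - 1))
```

In a real deployment this loop would live inside the scheduler, so every retry and final failure is logged centrally rather than handled ad hoc by each user.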
Put the “Manage” Back into Managed File Transfer
To begin the process of taking back control of your file transfer systems and processes, perform an analysis of the total cost of operations and ownership, analyzing key categories such as:
* Operations and use of network resources
* Security and enforcement, including risks of noncompliance and/or breach
* Cost of lack of notification
* Cost of data/process recovery
* Cost savings of automation
With this data in hand, I suspect that you, like the global energy company mentioned earlier, will conclude that investment in robust file transfer strategies and tool sets can significantly reduce both the operational costs and risks associated with the data transfer demands of the business. Getting a robust file transfer management strategy and tool set in place now will help ensure that your organization’s next mention in the news is for innovation in business practice—not for a process breakdown, a security breach or the inadvertent release of sensitive data.
About the Author:
Geoffrey Baird is vice president, Global Managed File Transfer, Sterling Commerce, a provider of software and services for enterprise and multi-enterprise integration of data, files and processes as well as applications for multi-channel selling and fulfillment and payments management. For more information about Sterling Commerce, go to