Information Management News

DBTA 100 2018 - The Companies That Matter Most in Data

Innovative vendors are helping to point the way forward with technologies and services to take advantage of the wealth of data that is pouring into companies. This sixth DBTA 100 list spans a wide variety of companies that are each addressing the evolving demands for hardware, software, and services. Some are long-standing companies with well-established offerings that have evolved over time, while others are much newer to the data scene. Read More

10 Things You Need to Know about GDPR

At Data Summit 2018 in Boston just 2 days before GDPR went into effect, Kristina Podnar, digital policy consultant, NativeTrust Consulting, LLC, presented a talk on what you need to know to become compliant with the new regulation—and maintain compliance. In her presentation, Podnar cut through the legal and regulatory noise, taking the 99 articles of GDPR and distilling them into 10 key areas so attendees could identify the things that they really need to deal with—and those that do not apply. Read More

12 Key Takeaways about Data and Analytics from Data Summit 2018

Data Summit 2018 was recently held in Boston. Big data technologies, AI, analytics, cloud, and software licensing best practices in a hybrid world were among the key areas considered during three days of thought-provoking presentations, keynotes, panel discussions, and hands-on workshops. Read More

2018: The Year of the Graph is Declared at Data Summit

As organizations undergo digital transformation to analyze and query large amounts of data at high speeds, they are increasingly leveraging graph databases to illuminate information about connections. The result is that 2018 will be a big year for graph technology, according to Sean Martin, CTO of Cambridge Semantics, and Scott Heath, CRO, Expero. Read More

Columnists

Todd Schraml

Database Elaborations

  • Physical Data Models Are Not Necessarily One-For-One With Logical Under usual circumstances, whether a relationship is one-to-many or many-to-many is, by itself, what drives the pattern used within the database model. Certainly, the logical database model should represent the proper business semantics of the situation. But on the physical side, there may be extenuating circumstances that lead a data modeler to consider an associative table construct even for a one-to-many relationship.
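As an illustration of that choice, here is a minimal sketch using SQLite DDL issued through Python's sqlite3 module, with hypothetical customer and order tables: option A carries the foreign key directly on the "many" side, while option B physicalizes the same one-to-many relationship through an associative table, using a UNIQUE constraint to preserve the cardinality.

```python
import sqlite3

conn = sqlite3.connect(":memory:")

# Logical model: one CUSTOMER places many ORDERs (a one-to-many relationship).
conn.execute("""
    CREATE TABLE customer (
        customer_id INTEGER PRIMARY KEY,
        name        TEXT NOT NULL
    )""")

# Physical option A: carry the foreign key directly on the "many" side.
conn.execute("""
    CREATE TABLE order_direct (
        order_id    INTEGER PRIMARY KEY,
        customer_id INTEGER NOT NULL REFERENCES customer(customer_id),
        order_date  TEXT NOT NULL
    )""")

# Physical option B: an associative table, even though the relationship is
# one-to-many. The UNIQUE constraint on order_id preserves the one-to-many
# semantics while leaving room for relationship-level attributes or a later
# shift to many-to-many.
conn.execute("""
    CREATE TABLE order_hdr (
        order_id   INTEGER PRIMARY KEY,
        order_date TEXT NOT NULL
    )""")
conn.execute("""
    CREATE TABLE customer_order (
        customer_id INTEGER NOT NULL REFERENCES customer(customer_id),
        order_id    INTEGER NOT NULL UNIQUE REFERENCES order_hdr(order_id),
        PRIMARY KEY (customer_id, order_id)
    )""")
conn.commit()
```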
Recent articles by Todd Schraml
Craig S. Mullins

DBA Corner

  • Who Owns the Data? With all of the data breaches and accusations of improper data usage in the news these days, the question of who owns data looms large. Understanding who owns which data is a complex question that can't be answered quickly or easily.
Recent articles by Craig S. Mullins
Kevin Kline

SQL Server Drill Down

  • Big News in the Microsoft Cloud If you've used Azure in the past, you probably know that there are two main ways to deploy SQL Server on Microsoft's cloud—Azure SQL Database, the PaaS offering; and Azure VMs running SQL Server. Microsoft is now offering a third deployment option, currently in preview, which provides full SQL Server engine capability, including SQL Agent, along with native VNet support.
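For context, from an application's point of view the preview offering (Azure SQL Database Managed Instance) is reached like any other SQL Server endpoint. The sketch below uses pyodbc with placeholder server, database, and credential values; it is not specific to the new option and works the same against Azure SQL Database or SQL Server in an Azure VM.

```python
import pyodbc

# Connection details are hypothetical placeholders; substitute your own
# server name, database, and credentials.
conn = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};"
    "SERVER=myinstance.example.database.windows.net;"
    "DATABASE=mydb;"
    "UID=myuser;"
    "PWD=mypassword;"
    "Encrypt=yes;"
)

cursor = conn.cursor()
cursor.execute("SELECT @@VERSION")  # confirms which engine you reached
print(cursor.fetchone()[0])
```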
Recent articles by Kevin Kline
Guy Harrison

Emerging Technologies

  • Transactions Come to MongoDB It may seem strange to see MongoDB expanding the very features of the relational databases that it originally rejected. In the last few releases, we've seen implementation of joins, strict schemas, and now ACID transactions. However, what this indicates is that MongoDB is increasingly contending for serious enterprise database workloads: MongoDB is expanding the scope of its ambitions.
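As a concrete illustration, here is a minimal sketch of a multi-document ACID transaction using PyMongo. It assumes MongoDB 4.0 or later running as a replica set; the connection string, database, and collection names are hypothetical.

```python
from pymongo import MongoClient

# Multi-document transactions require MongoDB 4.0+ on a replica set.
client = MongoClient("mongodb://localhost:27017/?replicaSet=rs0")
accounts = client.bank.accounts

with client.start_session() as session:
    # Both updates commit atomically on exit, or are aborted together
    # if an exception is raised inside the block.
    with session.start_transaction():
        accounts.update_one({"_id": "alice"}, {"$inc": {"balance": -100}},
                            session=session)
        accounts.update_one({"_id": "bob"}, {"$inc": {"balance": 100}},
                            session=session)
```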
Recent articles by Guy Harrison
Rob Mandeville

Next-Gen Data Management

  • Pros and Cons of a Data Services Layer Our data capture and retention requirements continue to grow at a very fast rate, which brings new entrants into the SQL and NoSQL markets all the time. However, not all data is created equal. Companies recognize that disparate data can and should be treated differently, so the way we persist that data can be extremely varied. Now, enter applications that need to access all that data across a very heterogeneous landscape, and we get to the point where we're reinventing the data access wheel every time someone needs to spin up another application or introduce another data source.
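To illustrate the idea, here is a minimal Python sketch of such a layer, assuming a hypothetical CustomerStore contract with two interchangeable backends: an in-memory dictionary standing in for a key-value store, and SQLite standing in for a relational store. Applications code against the contract rather than against each data source's native API.

```python
from abc import ABC, abstractmethod
import sqlite3

class CustomerStore(ABC):
    """The single contract applications code against, regardless of
    which backing store actually holds the data."""

    @abstractmethod
    def get(self, customer_id):
        ...

    @abstractmethod
    def put(self, customer_id, record):
        ...

class InMemoryStore(CustomerStore):
    """Stand-in for a key-value or document (NoSQL) backend."""
    def __init__(self):
        self._data = {}

    def get(self, customer_id):
        return self._data.get(customer_id)

    def put(self, customer_id, record):
        self._data[customer_id] = record

class SqliteStore(CustomerStore):
    """Stand-in for a relational backend."""
    def __init__(self, path=":memory:"):
        self._conn = sqlite3.connect(path)
        self._conn.execute(
            "CREATE TABLE IF NOT EXISTS customer (id TEXT PRIMARY KEY, name TEXT)")

    def get(self, customer_id):
        row = self._conn.execute(
            "SELECT id, name FROM customer WHERE id = ?", (customer_id,)).fetchone()
        return {"id": row[0], "name": row[1]} if row else None

    def put(self, customer_id, record):
        self._conn.execute(
            "INSERT OR REPLACE INTO customer (id, name) VALUES (?, ?)",
            (customer_id, record["name"]))
        self._conn.commit()

def greet(store, customer_id):
    """Callers depend only on the CustomerStore contract, so swapping the
    backend does not require rewriting data access code."""
    record = store.get(customer_id)
    if record is None:
        return "Unknown customer"
    return "Hello, %s!" % record["name"]
```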
Recent articles by Rob Mandeville

Trends and Applications