Trends and Applications
With the General Data Protection Regulation (GDPR) deadline quickly approaching in May, many organizations are scrambling to get their customer information systems in order to meet the requirements. Any company that collects and processes the personal data of European citizens and residents—whether it is names, IP addresses, photos, videos, health and biometric info, or other types of data—will be impacted.
Bitcoin was the hot topic in financial and tech circles in 2017. The decentralized digital currency jumped from just under $1,000 at the start of the year to an all-time high of $20,000 in December. The real game-changer here isn't Bitcoin, though, but the technology that powers it, known as blockchain. Despite the volatility and suspicion that mark the crypto market, the blockchain concept could prove a truly revolutionary innovation.
The impact of cognitive computing technologies—including artificial intelligence (AI) and machine learning (ML)—is increasingly being felt in data centers and database operations of all sizes, across all industries. Research shows that AI and related cognitive technologies are no longer just experiments conducted by computer or data scientists—they are part of a real-world technology wave that is already showing tangible business results.
"Digital transformation" is a term that can both unite and divide audiences in a single conversation. Delve into the topic at a conference or a cocktail party, and people within earshot will quickly agree that it's a good thing—an initiative that every organization should undertake if it wants to stay competitive. But just as quickly, the same people will start a debate about what digital transformation really is and what an organization should do to make it happen.
Today's global economy runs on data. Every day, individuals provide data while companies collect it. The rapid growth in the volume of this data has made it overwhelming and nearly impossible to manage. And as the value of data increases, so does the interest from outside, including potentially nefarious sources. What's required are newer, specific safeguards that address the myriad data challenges brought about by the digital age.
Digitization is transforming business faster than ever before—with software and technology now deeply ingrained in the core of organizations' operations and business functions, rather than siloed in IT. While companies reap the benefits of digital transformation, not every organization is prepared for the double-edged sword that comes with the widespread implementation of software and technology: audits.
If you're looking for a popular database management system (DBMS) platform, Microsoft SQL Server is a solid choice. Research from Gartner shows it's among the most widely deployed platforms in the category—second only to Oracle—with more than 20% of the DBMS market's $34.4 billion in total revenue. SQL Server is also experiencing rapid growth: industry data from 2016 shows SQL Server revenue rising 10.3%, faster than Oracle's and faster than the market overall.
Columns - Database Elaborations
For data architects, it is not unusual to use a data modeling tool to reverse-engineer the databases of existing solutions. The reverse engineering may serve a functional purpose, such as gathering information to evaluate a replacement option, or a comprehension purpose, such as working out which data should be extracted for a downstream business intelligence need.
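At its core, reverse engineering a database means querying the database's own metadata to recover its structure, which is what a modeling tool does under the covers. The sketch below illustrates the idea with SQLite's catalog table so the example is self-contained; against a server database a tool would instead read INFORMATION_SCHEMA or the vendor's catalog views.

```python
# Toy illustration of schema reverse engineering: build a tiny database,
# then recover its table and column structure from metadata alone.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE customer (id INTEGER PRIMARY KEY, name TEXT)")
conn.execute("CREATE TABLE orders (id INTEGER, customer_id INTEGER)")

# sqlite_master is SQLite's built-in catalog of schema objects.
tables = [row[0] for row in conn.execute(
    "SELECT name FROM sqlite_master WHERE type = 'table'")]

for table in tables:
    # PRAGMA table_info returns one row per column; index 1 is the name.
    columns = [col[1] for col in conn.execute(f"PRAGMA table_info({table})")]
    print(table, columns)
```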
Columns - DBA Corner
Data lake is a newer IT term created for a new category of data store. But just what is a data lake? According to IBM, "a data lake is a storage repository that holds an enormous amount of raw or refined data in native format until it is accessed." That makes sense. I think the most important aspect of this definition is that data is stored in its "native format." The data is not manipulated or transformed in any meaningful way; it is simply stored and cataloged for future use.
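The "native format" idea can be made concrete with a toy sketch (not any vendor's API): raw payloads go into the store byte-for-byte as received, while a separate catalog records metadata so the data can be located and interpreted later.

```python
# Minimal data lake sketch: raw data is stored untouched; only the
# catalog knows the source and format. All names here are illustrative.
import hashlib

lake = {}      # object store: key -> raw bytes, exactly as received
catalog = []   # metadata entries describing each stored object

def ingest(raw: bytes, source: str, fmt: str) -> str:
    """Store raw data with no transformation and catalog it for later use."""
    key = hashlib.sha256(raw).hexdigest()[:12]
    lake[key] = raw  # native format preserved
    catalog.append({"key": key, "source": source,
                    "format": fmt, "size": len(raw)})
    return key

key = ingest(b'{"sensor": 7, "temp": 21.5}', source="iot-feed", fmt="json")
```

The point of the sketch is that nothing parses or reshapes the payload at ingestion time; interpretation is deferred until the data is actually accessed.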
Columns - Quest IOUG Database & Technology Insights
As a DBA, I got all of my best ideas out of IOUG. I have spent a lot of time after sessions and during gatherings at COLLABORATE talking with my peers about things I was working on, or stumped by, at work. I have often been shocked at how quickly someone had an answer. But I have figured out that they weren't DBA gods; they were people like me who had simply hit the same issue sooner and had already spent hours working on it. I was just getting the benefit of all their time.
Columns - SQL Server Drill Down
Women's issues have headlined the news media for the past several months. Many stories, ranging from the #MeToo movement to the "Brogrammer" email blast at Google, have shown that women in technology (WIT) face negative work conditions and social pressures. In light of that, it seemed appropriate to dive into the topic of WIT within the SQL Server community (aka, the Data Platform community) and gauge where we're at.
Columns - Next-Gen Data Management
Moving to Automation Means Many Decisions
Columns - Emerging Technologies
The serverless computing architecture—sometimes called function as a service or FaaS—hides not just the underlying virtual machine, but also the application server itself. The cloud simply agrees to execute your code on demand or in response to an event.
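The programming model can be sketched in a few lines: you write a single function, and the platform invokes it per request or per event. The handler name and event shape below are generic assumptions for illustration, not any specific provider's contract.

```python
# Minimal FaaS-style handler sketch. In a real deployment the cloud
# platform, not your code, decides when and where this function runs.
import json

def handler(event, context=None):
    """Entry point the platform would call on demand or per event."""
    name = event.get("name", "world")
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"Hello, {name}!"}),
    }

# Locally, the same function can be exercised directly:
result = handler({"name": "DBTA"})
```

Everything below this function (the application server, the virtual machine, the operating system) is the platform's concern, which is exactly the abstraction the teaser describes.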
Duplicates are often the exact opposite of what we want. The discovery of one's evil clone, for instance, would be far from an ideal scenario in most people's minds. In all seriousness, though, one hardly needs to look to science fiction for examples of how the proliferation of duplicates is a common nuisance of modern life. Multiple envelopes in your mail containing the exact same bill, or multiple copies of the exact same file cluttering up your hard drive, are common and annoying problems even in our digital age. Indeed, in the world of computer programming, entire disciplines are dedicated to optimizing applications and data structures through the elimination of unnecessary duplicates.
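The basic deduplication move those disciplines build on is simple enough to show directly: track what has been seen and keep only the first occurrence, preserving order.

```python
def dedupe(items):
    """Remove duplicates while preserving first-seen order."""
    seen = set()
    unique = []
    for item in items:
        if item not in seen:  # set membership check is O(1) on average
            seen.add(item)
            unique.append(item)
    return unique

# Three distinct bills remain from five envelopes:
bills = ["electric", "water", "electric", "internet", "water"]
print(dedupe(bills))  # ['electric', 'water', 'internet']
```

Real systems (storage deduplication, duplicate-record matching) elaborate on this with hashing, chunking, or fuzzy comparison, but the seen-set pattern is the common core.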