Mastering in-memory data technology doesn’t have to be an elusive goal. At Data Summit, Viktor Gamov, senior solutions architect at Hazelcast, outlined what attendees need to know before diving into a low-latency in-memory project.
Gamov presented his session “Mastering In-Memory Data Technology” during the “In-Memory Revolution” track at Data Summit 2017.
The drivers for in-memory computing include fast access to big data, real-time latency requirements, situational awareness, spotting the right business moments, and scaling up and out to support the largest internet use cases.
Hardware trends are at the epicenter of in-memory computing's growth: the adoption of 64-bit architectures allows programs to address far more memory than before. Multi-core servers, cheaper RAM, and faster networking are making it easier for users to take the plunge into in-memory computing.
Distributed data application patterns users should be aware of include:
- Application Scaling
- Database Caching
- Distributed Computing
- Reactive / Smart Clients
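To make one of these patterns concrete, the sketch below illustrates database caching in the cache-aside style, using a plain Python dict to stand in for a distributed in-memory map; the `load_from_db` function is a hypothetical placeholder for a real database query, not part of any product's API.

```python
# Minimal cache-aside sketch: a dict stands in for a distributed
# in-memory map, and load_from_db is a hypothetical slow database call.

cache = {}

def load_from_db(key):
    # Placeholder for a database lookup.
    return f"value-for-{key}"

def get(key):
    # Serve from memory when possible; on a miss, fall back to the
    # database and populate the cache for subsequent reads.
    if key not in cache:
        cache[key] = load_from_db(key)
    return cache[key]

first = get("user:42")   # miss: loads from the "database", caches it
second = get("user:42")  # hit: served from the in-memory cache
```

In a real deployment the dict would be replaced by a shared, partitioned map spread across a cluster, so every application node sees the same cached data.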
Important aspects of application scaling include:
- Elastic Scalability
- Super Speeds
- High Availability
- Fault Tolerance
- Cloud Readiness
- EC2, GCE, Docker deployment
The goal is to make in-memory computing reliable, scalable, and durable, Gamov said.
Many Data Summit 2017 presentations have been made available by speakers at www.dbta.com/datasummit/2017/presentations.aspx.