
Breaking the Memory Wall in MonetDB

Izunna Okpala


The rate of improvement in microprocessor speed exceeds the rate of improvement in DRAM memory speed — each is improving exponentially, but the exponent for microprocessors is substantially larger than that for DRAMs. The difference between diverging exponentials also grows exponentially. Main-memory access has therefore become a performance bottleneck for many computer applications, a phenomenon widely known as the "memory wall." This report focuses on how research around the MonetDB database system has led to a redesign of database architecture to take advantage of modern hardware, and in particular to avoid hitting the memory wall. This encompasses (i) a redesign of the query execution model to better exploit pipelined CPU architectures and CPU instruction caches; (ii) the use of columnar rather than row-wise data storage to better exploit CPU data caches; (iii) the design of new cache-conscious query processing algorithms; and (iv) the design and automatic calibration of memory cost models to choose and tune these cache-conscious algorithms in the query optimizer.

 

Introduction

 

The "memory wall" is the growing disparity in speed between the CPU and memory outside the CPU chip. An important reason for this disparity is the limited communication bandwidth beyond chip boundaries. While CPU speed improved at an annual rate of 55%, memory speed improved at only 10%. Given these trends, it was expected that memory latency would become an overwhelming bottleneck in computer performance. The average time to access memory is given by the equation below, where tc and tm are the cache and DRAM access times and p is the probability of a cache hit. Two conservative assumptions are made.

tavg = p × tc + (1 – p) × tm

The first assumption is that the cache speed matches that of the processor, and specifically that it scales with the processor speed. This is certainly true for on-chip cache, and allows all results to be normalized in terms of instruction cycle times (essentially saying tc = 1 CPU cycle). The second assumption is that the cache is perfect: it never has a conflict or capacity miss, so the only misses are the compulsory ones. Thus (1 − p) is just the probability of accessing a location that has never been referenced before.

Now, although (1 − p) is small, it is not zero. Therefore, as tc and tm diverge, tavg will grow and system performance will degrade. In fact, it will hit a wall. In most programs, 20–40% of the instructions reference memory. For the sake of argument, take the lower number, 20%. That means that, on average, every fifth instruction executed references memory, so we hit the wall when tavg exceeds 5 instruction times. At that point, system performance is totally determined by memory speed; making the processor faster will not affect the wall-clock time to complete an application.
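To make the argument concrete, the wall point can be computed directly from the tavg equation. This is a minimal sketch with illustrative, assumed latency values (the 5-cycle threshold follows from the 20% memory-reference rate above):

```python
# Average memory access time: t_avg = p*t_c + (1 - p)*t_m,
# with t_c normalized to 1 CPU cycle. Latency values are illustrative.

def t_avg(p, t_c, t_m):
    """Average access time given cache hit probability p."""
    return p * t_c + (1 - p) * t_m

t_c = 1.0   # cache access: 1 cycle (assumption 1: cache matches CPU speed)
p = 0.99    # assumed hit rate (assumption 2: compulsory misses only)

# As DRAM latency t_m diverges from t_c, t_avg grows without bound.
for t_m in (10, 100, 300, 1000):
    avg = t_avg(p, t_c, t_m)
    hit_wall = avg > 5   # 20% of instructions touch memory -> wall at 5 cycles
    print(f"t_m={t_m:5d}: t_avg={avg:6.2f} cycles, wall hit: {hit_wall}")
```

Even with a 99% hit rate, a 1000-cycle DRAM latency pushes tavg past the 5-cycle wall; no processor speedup can recover the difference.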

Vertical storage: Whereas traditionally, relational database systems store data in a row-wise fashion (which favors single record lookups), MonetDB uses columnar storage which favors analysis queries by better using CPU cache lines.

Bulk query algebra: Much like the CISC versus RISC idea applied to CPU design, the MonetDB algebra is deliberately simplified with respect to the traditional relational set algebra to allow for much faster implementation on modern hardware.

Cache-conscious algorithms: The crucial aspect in overcoming the memory wall is good use of CPU caches, for which careful tuning of memory access patterns is needed. This called for a new breed of query processing algorithms, of which radix-partitioned hash-join is illustrated in some detail.

Memory access cost modeling: For query optimization to work in a cache-conscious environment, a methodology was developed for creating cost models that take the cost of memory access into account. To work on diverse computer architectures, these models are parameterized at runtime using automatic calibration techniques.
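As an illustration of the cache-conscious idea behind the radix-partitioned hash join mentioned above, here is a minimal, hypothetical sketch (not MonetDB's actual implementation): both relations are first scattered into partitions on the low bits of the join key, so that each per-partition hash table is small enough to stay cache-resident during the join phase.

```python
# Radix-partitioned hash join sketch. NUM_BITS is an assumed parameter;
# in practice it is chosen so each partition fits in the CPU cache.

NUM_BITS = 2                      # 2 radix bits -> 4 partitions
NUM_PARTS = 1 << NUM_BITS

def radix_partition(pairs):
    """Scatter (key, value) pairs into partitions by the key's low bits."""
    parts = [[] for _ in range(NUM_PARTS)]
    for key, val in pairs:
        parts[key & (NUM_PARTS - 1)].append((key, val))
    return parts

def radix_hash_join(left, right):
    lparts, rparts = radix_partition(left), radix_partition(right)
    out = []
    for lp, rp in zip(lparts, rparts):
        table = {}                           # per-partition, cache-sized
        for key, val in lp:
            table.setdefault(key, []).append(val)
        for key, rval in rp:                 # probe only the matching partition
            for lval in table.get(key, []):
                out.append((key, lval, rval))
    return out

left  = [(1, "a"), (2, "b"), (5, "c")]
right = [(1, "x"), (5, "y"), (7, "z")]
print(radix_hash_join(left, right))   # [(1, 'a', 'x'), (5, 'c', 'y')]
```

The point of the partitioning pass is purely a memory access pattern: probing one small hash table at a time avoids the random, cache-missing probes of a single large table.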

 

The Memory Hierarchy

 

The main memory of computers consists of dynamic random access memory (DRAM) chips. While CPU clock speeds have been increasing rapidly, DRAM access latency has hardly improved in the past 20 years. Reading DRAM memory took 1–2 cycles in the early 1980s; currently it can take more than 300 cycles. Since typically one in three program instructions is a memory load/store, this "memory wall" can in the worst case reduce the efficiency of modern CPUs by two orders of magnitude. Typical system monitoring tools (top, or Windows Task Manager) do not provide insight into this performance aspect: a 100% busy CPU could be 95% memory stalled. To hide the high DRAM latency, the memory hierarchy has been extended with cache memories (cf., Figure 1), typically located on the CPU chip itself.

 

The fundamental principle of all cache architectures is reference locality, i.e., the assumption that at any time the CPU repeatedly accesses only a limited amount of data that fits in the cache. Only the first access is "slow," as the data has to be loaded from main memory; this is a compulsory cache miss. Subsequent accesses (to the same data or memory addresses) are then "fast," as the data is available in the cache. This is called a cache hit. The fraction of memory accesses that can be fulfilled from the cache is called the cache hit rate.

Cache memories are organized in multiple cascading levels between the main memory and the CPU. They become faster, but smaller, the closer they are to the CPU. In the remainder we assume a typical system with two cache levels (L1 and L2), but the discussion generalizes to an arbitrary number of cascading cache levels. In practice, cache memories keep not only the most recently accessed data, but also the instructions that are currently being executed. Therefore, almost all systems nowadays implement two separate L1 caches, a read-only one for instructions and a read-write one for data. The L2 cache, however, is usually a single "unified" read-write cache used for both instructions and data.
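The effect of reference locality on the cache hit rate can be illustrated with a toy LRU-cache simulation. This is a deliberately simplified sketch with assumed parameters; real caches also have line granularity and limited associativity, which it ignores:

```python
# Toy cache model: an LRU cache of fixed capacity services an address
# trace; only first accesses (and evicted lines) miss.

from collections import OrderedDict

def simulate(accesses, capacity):
    """Return the cache hit rate of an LRU cache over an access trace."""
    cache = OrderedDict()
    hits = 0
    for addr in accesses:
        if addr in cache:
            hits += 1
            cache.move_to_end(addr)        # refresh LRU position
        else:
            if len(cache) >= capacity:
                cache.popitem(last=False)  # evict least recently used
            cache[addr] = True
    return hits / len(accesses)

# 800 accesses over 8 addresses: strong locality, fits in the cache.
local = list(range(8)) * 100
# 800 accesses over 800 distinct addresses: no reuse at all.
scattered = list(range(800))

print(simulate(local, 16))      # 0.99: only 8 compulsory misses
print(simulate(scattered, 16))  # 0.0: every access misses
```

The two traces perform the same number of accesses; only the access pattern differs, which is exactly why cache-conscious algorithms tune memory access patterns rather than instruction counts.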

 

MonetDB Architecture

The storage model deployed in MonetDB is a significant deviation from traditional database systems. It uses the decomposed storage model (DSM), which represents relational tables using vertical fragmentation by storing each column in a separate <surrogate, value> table, called a binary association table (BAT). The left column, often the surrogate or object identifier (oid), is called the head, and the right column the tail. MonetDB executes a low-level relational algebra called the BAT algebra. Data in execution is always stored in (intermediate) BATs, and even the result of a query is a collection of BATs. Figure 2 shows the design of MonetDB as a back-end that acts as a BAT algebra virtual machine, with a variety of front-end modules on top that support popular data models and query languages (SQL for relational data, XQuery for XML). BAT storage takes the form of two simple memory arrays, one for the head and one for the tail column (variable-width types are split into
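The DSM idea described above can be sketched in a few lines. This is hypothetical illustration code, not MonetDB's actual API: each column becomes a BAT with parallel head (oid) and tail (value) arrays, and a BAT-algebra operator consumes and produces whole BATs.

```python
# Sketch of the decomposed storage model: one BAT per column, each a
# pair of parallel arrays, with a bulk select operator over the tail.

class BAT:
    """A binary association table: parallel head (oid) and tail arrays."""
    def __init__(self, head, tail):
        self.head, self.tail = list(head), list(tail)

    def select(self, pred):
        """BAT-algebra select: keep pairs whose tail satisfies pred."""
        pairs = [(h, t) for h, t in zip(self.head, self.tail) if pred(t)]
        return BAT([h for h, _ in pairs], [t for _, t in pairs])

# A table (id, name, age) decomposes into one BAT per column,
# all sharing the same dense oid head:
oids = [0, 1, 2]
name = BAT(oids, ["ann", "bob", "eve"])
age  = BAT(oids, [31, 24, 45])

adults = age.select(lambda a: a >= 30)   # the intermediate is itself a BAT
print(adults.head, adults.tail)          # [0, 2] [31, 45]
```

Because an operator scans one contiguous tail array at a time, a column-wide select touches only the bytes it needs, which is the cache-line advantage of vertical storage.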


Blog

Planning to Study in the US? Attend the EducationUSA Virtual College Fair.

Izunna Okpala


The EducationUSA Virtual College Fair starts today…

Have you registered?

Even without registration, you can participate in the College Fair. The program for graduates begins today, Thursday, November 19, 2020, from 1–5pm Nigerian time.

There are 92 accredited universities waiting to meet with you. This is your opportunity to explore the various study and funding opportunities available to graduate students at institutions across the U.S.

To participate in the ONLINE fair visit http://bit.ly/AfricaFair2020.

Watch this short video on how to navigate the website and participate in the fair https://fb.watch/1QCdl4Cb9q/

Tips on how to make the most of the virtual college fair experience

  1. Check out the universities: See the list of colleges that will be participating at the virtual fair https://sites.google.com/educationusa.org/africafair2020/home/graduate-fair?authuser=0. Click around to find out what they say about themselves.
  2. Show your best self and ask questions: Make a good impression. College representatives love meeting students. Think about how you will introduce yourself and what you want to say, especially about where you're from and what you hope to find in a college. The virtual fair is for you, so do not be afraid to ask questions.
  3. Follow up: Did you meet a university rep who had great things to say? Make a note of their name and contact details and follow up.

For more info about EducationUSA Lagos, follow them on their social media handles.

Enjoy the fair!


Articles

A conference on blockchain and health is scheduled to be held at the Africa Blockchain Developers Call.

Izunna Okpala


The Africa Blockchain Developers Call (ABDC) Pan-African Bootcamp on blockchain technology has declared its intention to hold a weekend conference on incorporating blockchain technology into Africa’s health sector.

In an attempt to execute comprehensive blockchain training sessions and promote the implementation of specially designed applications for different sectors in Africa, the Bootcamp, officially launched on 5 September, has taken on a host of African developers.

The Bootcamp also features virtual weekend conferences on many use cases for blockchain. These conferences are aimed at encouraging creative and comprehensive discussions on the implementation of blockchain technology in Africa, including platform presentations by businesses and panel sessions on many blockchain issues. The first meeting, focusing on Blockchain in Finance, took place on September 5. It featured a keynote speech by Professor Anicia Peters, Pro-Vice Chancellor for Science, Innovation and Development at the University of Namibia.

The next conference, scheduled for October 3, will focus on the theme Blockchain in Health. The keynote speech will be given by Arnab Paul, President of the Kolkata Chapter in India. Several organizations and startups will also give platform presentations, via their representatives, on medical use cases for blockchain technology.


Articles

Pan-African incubator, MEST Africa to invest $700k in its Class of 2020

Izunna Okpala


After graduating its 2020 class a month ago, MEST Africa, the Pan-African incubator, revealed that it will invest $100k each in seven start-ups.

In the final pitching event, fifteen teams participated, but only seven were picked, adding $700k to the overall funding round for this class.

In addition to the funding, start-ups can also participate in the ongoing incubation programme at MEST.

Speaking about the cohort, Ashwin Ravichandran, managing director of MEST Africa, had this to say:

This was the first time that our entrepreneurs were educated online and away from our campus for the most part, and I’m proud to say they came out stronger than ever. We are incredibly excited about the 7 companies in which we have chosen to invest and look forward to continuing our support and mentorship as they start their companies across the continent.

The seven startups include:

Shopa, a last-mile ordering and delivery startup for informal retailers in Africa.

Heny, a platform that helps young professionals explore new flavors and perspectives by changing the way diners eat.

Boxconn, a network that gives companies and individuals access to a list of confirmed distribution partners nearest to them, delivering packages instantly from one location to another.

KPILens, a startup that automates and simplifies the documentation, tracking, and assessment process for development organizations.

Tendo, an online network that ties independent resellers to companies, aiming to tackle two of the biggest issues facing Africa: unemployment and gender inequality.

Joovlin, which helps fintech platforms and merchants increase their sales by interconnecting mobile wallet users and allowing them to transact with each other.

Eleka, which streamlines customer onboarding procedures that are fraught with lengthy and expensive manual steps.

Move In Rentals, Credia, Uyolo, VendoorPro, Trastea, Cornerstock, Colibri, and JidiTrust are the other startups that didn’t receive funding from MEST.

Last year, MEST invested $100k in eleven startups while celebrating its 11th year in business. The sum, $1.1 million, represented the organization’s largest single cohort investment.

And although the amount this year is $400k smaller, the selected startups will have the opportunity to join the incubator's portfolio of over 40 startups across Ghana, Kenya, Nigeria, and South Africa.
