Today, while scrolling my WordPress Reader page, I saw a post that reminded me of something I had fun with a while ago and have found useful in many situations:
As simple as that!
A fast and straightforward approach to creating a local IPS repository.
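The whole thing boils down to a handful of commands. A minimal sketch (the `/repo` path is my own choice, and mirroring the full release repository takes a lot of space and time, so adjust to your environment):

```shell
# Create an empty IPS repository (hypothetical path /repo)
pkgrepo create /repo

# Pull packages from the Oracle release repository into the local one
pkgrecv -s http://pkg.oracle.com/solaris/release -d /repo '*'

# Point the local pkg.depotd service at the new repository and start it
svccfg -s application/pkg/server setprop pkg/inst_root=/repo
svcadm refresh application/pkg/server
svcadm enable application/pkg/server

# Repoint the 'solaris' publisher to the local repository
pkg set-publisher -G '*' -g /repo solaris
```

After that, `pkg install` resolves everything locally, with no trip to Oracle's servers.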
Most people associate direct path reads with an algorithm that merely controls how a read is performed. In an Exadata environment, however, this is the algorithm that balances the load between the compute and the storage nodes. Something really important.
As usual, the algorithm is not perfect, and in some situations we need to be able to control it.
Here we'll cover the following parameters:
Each parameter has its own "charm" and reason to be here.
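To give a flavor of the kind of control I mean, the best-known knob is the hidden parameter `_serial_direct_read` (shown here purely as an illustration; as with any underscore parameter, test carefully before touching it outside a lab):

```sql
-- Force serial direct path reads for the current session
ALTER SESSION SET "_serial_direct_read" = ALWAYS;

-- Disable them entirely
ALTER SESSION SET "_serial_direct_read" = NEVER;

-- Return to the default, size-based decision
ALTER SESSION SET "_serial_direct_read" = AUTO;
```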
If you try to find out what HCC is and how it works, you could start by reading the documentation, then some books and blog posts, and in the end you would have to put it all together yourself. In this post I'll do exactly that: put it all together, starting with the basics and going through the internals with examples.
HCC stands for Hybrid Columnar Compression. This is the kick-ass compression feature of Oracle Database, which can give you up to a 15x compression ratio. It can be used only on SPARC SuperCluster, Exadata, Pillar Axiom, and the ZFS Storage Appliance (ZFSSA).
Well, that was the marketing part. Now let's see how it works.
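From the SQL side, HCC is just a compression attribute you pick at table (or partition) creation time. A quick sketch, using a made-up table name:

```sql
-- Create an HCC table; the four levels trade CPU for ratio:
-- QUERY LOW, QUERY HIGH, ARCHIVE LOW, ARCHIVE HIGH
CREATE TABLE sales_hcc
  COMPRESS FOR QUERY HIGH
  AS SELECT * FROM sales;

-- Check which compression is in effect
SELECT table_name, compression, compress_for
  FROM user_tables
 WHERE table_name = 'SALES_HCC';
```

The interesting part, which we'll get to, is what those levels do internally to the block layout.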
One of the biggest advantages of Exadata is the Storage Indexes. Unfortunately, Oracle does not give us much information about them, and if you dig around the net a bit, you will find a lot of creepy stories about how they are supposed to work. So instead of writing yet another story, I'll demonstrate everything with examples.
Oracle tries to automate everything; in general, they are trying to reduce administration. Nothing wrong with that, but as we know, the automated features in Oracle Database are good in most situations, not all of them. Where the algorithms don't handle things well, human intervention is needed.
The same is true of Storage Indexes. They are transparent to the database, you cannot control them in any documented way, and they are fully automated. In general they work well, but in some cases, for some objects, they are created on the wrong column set. "Some objects" may not sound serious. But what if those objects are at the core of your system? What would happen if the performance of your base reports depended on exactly those storage indexes? That's bad, because you cannot control them. You can just restart the cell nodes and hope... hope that "Exadata" will create them the right way for you. Well, there is another way.
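You can at least measure whether storage indexes are helping you: the session statistic `cell physical IO bytes saved by storage index` counts the bytes the cells skipped thanks to them. And there is the well-known undocumented switch to turn them off (again, an underscore parameter, so handle with care):

```sql
-- How much I/O did storage indexes save in my session?
SELECT n.name, m.value
  FROM v$statname n
  JOIN v$mystat   m ON m.statistic# = n.statistic#
 WHERE n.name = 'cell physical IO bytes saved by storage index';

-- Disable storage index use for the current session (undocumented)
ALTER SESSION SET "_kcfis_storageidx_disabled" = TRUE;
```

If the statistic stays at zero for your key workload, the indexes were built on the wrong columns, and that is exactly the case we'll look at.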
Exadata X4 – Smart Flash Cache Compression – it's Smart, it's Flash, it's Compression... So it must be great, no?
If you take a look at the datasheet for this new feature, you'll notice something interesting: "zero performance overhead for compression and decompression". Wow, now it's even better! Smart, Flash, Compression, Zero Overhead!
Let's see what more we can find about it:
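For reference, the feature is enabled from the storage cells themselves through CellCLI, and the flash cache has to be recreated for it to take effect. A sketch of the sequence (attribute names are from memory; verify them against your cell software version before running anything):

```
CellCLI> DROP FLASHCACHE
CellCLI> ALTER CELL flashCacheCompress=TRUE
CellCLI> CREATE FLASHCACHE ALL
```

Note that on X3 hardware an additional attribute (`flashCacheCompX3Support`) is reportedly required as well.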