Discussion:
Storage choice for Oracle database on VMware
Radoulov, Dimitre
2018-10-30 17:14:08 UTC
Hi all,

considering today's technology (XFS as the default filesystem on Enterprise Linux 7, modern hardware, and the latest VMware releases), what storage type would you use for a single database instance on VMware?
Oracle versions range from 11.2 to 12.2, and database sizes vary from 300 GB to 3 TB+.

Would you use XFS with non-default values for filesystemio_options, or ASM?

Regards
Dimitre
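[For context, the parameter under discussion is set roughly like this; SETALL enables both asynchronous and direct I/O on filesystem datafiles. A sketch only, config fragment for illustration:]

```sql
-- Check the current value
SHOW PARAMETER filesystemio_options

-- Non-default candidates: ASYNCH, DIRECTIO, SETALL
-- The parameter is not dynamic, so a restart is required
ALTER SYSTEM SET filesystemio_options = 'SETALL' SCOPE=SPFILE;
SHUTDOWN IMMEDIATE
STARTUP
```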
Chris Taylor
2018-10-30 17:29:39 UTC
After dealing with multiple filesystems of very busy databases for the last
year (one of which is 70+ TB), I would say ASM ALL THE WAY. Never go back
to filesystems after using ASM.

Adding and managing space is so much better on ASM.

Just my $0.02.

Chris
Matthew Parker
2018-10-30 17:36:15 UTC
It is always a choice.

If you decide to use filesystems, then XFS, although depending on your storage team you may only have NFS available.

ASM still holds a performance advantage over XFS, but not by much.

On the other hand, ASM means extra Grid Infrastructure software to install and patch, versus simple XFS filesystems.



Matthew Parker
Chief Technologist
Dimensional DBA
Oracle Gold Partner
425-891-7934 (cell)
D&B 047931344
CAGE 7J5S7
***@comcast.net
http://www.linkedin.com/pub/matthew-parker/6/51b/944/
www.dimensionaldba.com

n***@gmail.com
2018-10-30 18:06:19 UTC
ASM also requires 8 GB of RAM in 18c. In a VM environment that's a pretty big
overhead; not a reason not to do it, but a consideration. You'll also want
to think about what your admins are used to and what VMware refers to as
day-2 operations: disk adds, FS expansion, backup, etc. There are some
VMware specifics for databases that you may find your VM admins don't
immediately buy into.
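[As an illustration of those day-2 operations: growing an XFS filesystem online after the VMDK has been extended in vSphere might look like the sketch below. Device name, volume group and mount point are hypothetical, and an LVM layout is assumed; a command fragment, not a tested procedure.]

```shell
# Rescan the virtual disk so Linux sees the new size (device name illustrative)
echo 1 > /sys/class/block/sdb/device/rescan

# Grow the LVM stack, then the filesystem; XFS grows online while mounted
pvresize /dev/sdb
lvextend -L +100G /dev/vg_oradata/lv_oradata
xfs_growfs /u02/oradata
```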
Radoulov, Dimitre
2018-10-30 18:12:05 UTC
Thank you Chris, Matthew and Niall,

so the question is whether ASM is worth it performance-wise.

With the default Oracle database settings, I/O on XFS would be
synchronous, right?

And if I understand MOS Note 1987437.1 correctly, on Linux you cannot
enable async I/O without turning on direct I/O too.

Regards
Dimitre
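[One way to see which I/O path an instance is actually using is to watch its system calls; a diagnostic sketch, where the process name pattern and SID are illustrative:]

```shell
# io_submit/io_getevents indicate kernel async I/O; pread64/pwrite64 are synchronous
strace -f -e trace=io_submit,io_getevents,pread64,pwrite64 \
       -p "$(pgrep -f ora_dbw0_ORCL)" 2>&1 | head

# A non-zero, growing aio-nr also shows active async I/O contexts on the host
cat /proc/sys/fs/aio-nr
```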
Stefan Koehler
2018-10-30 20:20:58 UTC
Hello Dimitre,
what is the problem with direct I/O? You should never run an Oracle database through the page cache anyway :)

I would go with tweaked XFS (e.g. "nobarrier", as barrier information is usually not passed through correctly with VMDKs on VMFS, etc.) if it is just one single instance in this VM.

Best Regards
Stefan Koehler

Independent Oracle performance consultant and researcher
Website: http://www.soocs.de
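[The tweak Stefan mentions would land in /etc/fstab roughly as below. The device and mount point are hypothetical; note that the nobarrier option was deprecated and later removed from the kernel, so this config fragment only applies to EL7-era systems.]

```
# /etc/fstab fragment (EL7-era; nobarrier no longer exists on recent kernels)
/dev/vg_ora/lv_data  /u02  xfs  noatime,nodiratime,nobarrier  0 0
```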
Leng
2018-10-30 21:33:05 UTC
ASM is great when you plan correctly. If you don't, it's very painful. E.g. if you have different-sized disks, ASM will be forever rebalancing, and failing because there is not enough space on the odd disk, so you need to vacate the diskgroup to rebuild it. (Yes, I know... not my fault, the previous consultant did it...) If there's an ASM bug, you may have to take an outage on ASM to apply the patch.

Normal disk operations like dd against ASM devices are almost impossible. Finding a corrupted data block on an ASM disk takes great ASM expertise from a great Oracle Support engineer.

Those were some of my worst ASM nightmares. It was only 2 years ago. I have since moved on...

Cheers,
Leng
Andrew Kerber
2018-10-30 21:38:18 UTC
Most places with growing databases and heavy-duty environments on VMware
use ASM. Some use XFS or similar plus LVM, though I am not fond of those.
--
Andrew W. Kerber

'If at first you don't succeed, don't take up skydiving.'
Radoulov, Dimitre
2018-10-31 07:20:58 UTC
Thank you all for the valuable input!
Post by Stefan Koehler
what is the problem with direct I/O? You should never run an Oracle
database through page cache anyway :)

I'm not sure direct I/O is always the best choice: I think certain
workloads may benefit from the FS cache.

Anyway, I'm wondering why setall is still not the default value for
filesystemio_options on Linux (most probably because of bugs with
certain filesystems and kernel versions).



Regards
Dimitre
Neil Chandler
2018-10-31 11:29:05 UTC
Radoulov,

The caching in the SGA understands your data usage patterns through the LRU algorithms and will have cached all of the best data. The FS cache, if you dump it out, will look a lot more like white noise with few discernible patterns. The SAN cache even more so. The more single-block reads you have, the more like white noise it all looks. The likelihood of there being a cache hit in the FS or SAN cache is relatively low, and the advantage of direct path reads significantly outweighs the advantage of both of those caches. It is worth noting that on most SANs, if you specify that the LUN is for a database, the array will disable read-ahead to pre-populate the cache, as it understands that this is not the best use of the cache (the general rule is that SAN cache should be reserved exclusively for writes when the SAN is used for a database).

Note that these statements are generalisations, and there may be cases where your assertion is true, but those will be edge cases, and I would recommend that you have a provable scenario to justify running in that configuration.

Neil Chandler
Database Guy.
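[Neil's point, that the second-tier cache only sees what looks like white noise, can be sketched with a toy two-tier LRU simulation. The block counts, cache sizes and the 90/10 skew below are made-up parameters, not measurements from any real system:]

```python
import random
from collections import OrderedDict

class LRUCache:
    """Minimal LRU cache that caches every key it sees."""
    def __init__(self, size):
        self.size, self.data = size, OrderedDict()
    def access(self, key):
        if key in self.data:
            self.data.move_to_end(key)     # refresh recency
            return True                    # hit
        self.data[key] = None
        if len(self.data) > self.size:
            self.data.popitem(last=False)  # evict least recently used
        return False                       # miss

random.seed(42)
N_BLOCKS = 10_000

def next_block():
    # 90% of reads hit 100 hot blocks, the rest are uniform (OLTP-ish skew)
    return random.randrange(100) if random.random() < 0.9 else random.randrange(N_BLOCKS)

sga = LRUCache(200)       # first tier: buffer cache, sized to hold the hot set
fs_cache = LRUCache(200)  # second tier: FS/SAN cache, only sees SGA misses

sga_hits = fs_hits = total = 0
for _ in range(50_000):
    total += 1
    block = next_block()
    if sga.access(block):
        sga_hits += 1
    elif fs_cache.access(block):
        fs_hits += 1

print(f"SGA hit ratio: {sga_hits / total:.2%}")
print(f"FS-cache hit ratio on the miss stream: {fs_hits / (total - sga_hits):.2%}")
```

The first tier absorbs the skew, so the miss stream reaching the second cache is close to uniform and the equally-sized FS cache contributes almost nothing.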
Radoulov, Dimitre
2018-11-09 14:45:09 UTC
Hello all,


after a few quick tests on XFS and ASM (calibrate_io and Swingbench), I
see that direct and asynchronous I/O definitely make a difference.

Stefan and Neil, thank you for your suggestions!



Regards

Dimitre
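[For reference, the calibrate_io test mentioned above is run via DBMS_RESOURCE_MANAGER; a sketch, where num_physical_disks and max_latency are illustrative values, and the call needs SYSDBA, timed_statistics=TRUE and async I/O enabled:]

```sql
SET SERVEROUTPUT ON
DECLARE
  l_max_iops PLS_INTEGER;
  l_max_mbps PLS_INTEGER;
  l_latency  PLS_INTEGER;
BEGIN
  DBMS_RESOURCE_MANAGER.CALIBRATE_IO(
    num_physical_disks => 8,   -- estimate of underlying spindles/LUNs
    max_latency        => 20,  -- acceptable single-block latency in ms
    max_iops           => l_max_iops,
    max_mbps           => l_max_mbps,
    actual_latency     => l_latency);
  DBMS_OUTPUT.PUT_LINE('max_iops=' || l_max_iops ||
                       ' max_mbps=' || l_max_mbps ||
                       ' latency=' || l_latency);
END;
/
```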
n***@gmail.com
2018-11-09 18:31:01 UTC
Dimitre

If you wish to test relative I/O performance for Oracle then SLOB is the
way to go. It also applies for CPU, albeit logical I/O vs. physical I/O.
Radoulov, Dimitre
2018-11-09 18:38:51 UTC
Permalink
Thanks Niall,
I will try SLOB too; I used Swingbench because it's very easy to set up.


Regards
Dimitre
n***@gmail.com
2018-11-09 19:03:34 UTC
Check out Kevin's material and https://flashdba.com/slob/ for wrapper
scripts. SLOB is pretty easy to set up.
Post by Radoulov, Dimitre
Thanks Niall,
I will try SLOB too, I used Swingbench because it's very easy to set up.
Regards
Dimitre
Post by Radoulov, Dimitre
Dimitre
If you wish to test relative io performance for Oracle then SLOB is the
way to go. Also applies for CPU, albeit lio vs pio.
Post by Radoulov, Dimitre
Hello all,
after a few quick tests on XFS and ASM (calibrate_io and swingbench) I
see that direct and asynchronous I/O definitely make a difference.
Stefan and Neil, thank you for your suggestions!
Regards
Dimitre
Radoulov,
The caching in the SGA understands your data usage patterns through the
LRU algorithms and will have cached all of the best data. The FS cache, if
you dump it out, will look a lot more like white noise with few discernable
patterns. The SAN cache even more so. The more single block reads you have,
the more like white noise it all looks. The liklihood of there being a
cache hit in the FS or SAN cache is relatively low. The advantage of direct
path reads significantly outweights the advantage of both of those caches.
It is worth noting in that on most SAN caches, if you specify that the LUN
is for a database it will disable read-ahead to pre-populate the cache as
it understands that it is not the best use of the cache (the general rule
is that SAN cache should be reserved exclusively for writes when the SAN is
used for the database.)
Note that these statements are generalisation, and that there may be
cases where your assertion is true but they will be an edge case and I
would recommend that you have a provable scenario to justify running in
that configuration.
Neil Chandler
Database Guy.
------------------------------
*Sent:* 31 October 2018 07:20
*To:* Andrew Kerber
*Subject:* Re: Storage choice for Oracle database on VMware
Thank you all for the valuable input!
Post by Stefan Koehler
what is the problem with direct I/O? You should never run an Oracle
database through page cache anyway :)
I'm not sure if direct I/O is always the best choice. I think that
certain workloads may benefit from the FS cache.
Anyway, I'm wondering why setall is still not the default value for
filesystemio_options on Linux (most probably because of the bugs with
certain filesystems and kernel versions).
Regards
Dimitre
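For reference, a minimal sketch of checking and changing the parameter being discussed (assuming SYSDBA access on the database server; this is not from the thread, just an illustration -- the instance must be restarted for the spfile change to take effect):

```shell
# Show the current value, then set SETALL (direct + async I/O) in the
# spfile. SETALL only takes effect after an instance restart.
sqlplus -s / as sysdba <<'EOF'
show parameter filesystemio_options
alter system set filesystemio_options = SETALL scope=spfile;
EOF
```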
Most places with growing databases and heavy duty environments on vmware
use ASM. Some use XFS or similar and LVM, though I am not fond of those.
ASM is great when you plan correctly. If you don’t, it’s very painful.
E.g. if you have different-sized disks, ASM will be forever rebalancing and
failing because there is not enough space on the odd disk. So you need to vacate
the diskgroup to rebuild it. (Yes, you know... not my fault, the previous
consultant did it...) If there’s an ASM bug you may have to take an outage
on ASM to apply the patch.
Normal disk operations like dd against ASM disks are almost
impossible. Finding a corrupted data block on an ASM disk takes great
ASM expertise from a great Oracle support engineer.
Those were some of my worst ASM nightmares. It was only 2 years ago.
I have since moved on...
Cheers,
Leng
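The mixed-disk-size trap Leng describes is easy to spot before it bites. A sketch (assumes an ASM instance and SYSASM access; purely illustrative, not from the thread): list per-disk sizes in each diskgroup and check that they are uniform.

```shell
# List every ASM disk with its diskgroup and size; within a diskgroup,
# TOTAL_MB should be identical for all disks to avoid endless rebalances.
sqlplus -s / as sysasm <<'EOF'
select g.name diskgroup, d.name disk, d.total_mb, d.free_mb
from   v$asm_disk d
join   v$asm_diskgroup g on g.group_number = d.group_number
order  by g.name, d.name;
EOF
```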
Post by Stefan Koehler
Hello Dimitre,
what is the problem with direct I/O? You should never run an Oracle
database through page cache anyway :)
Post by Stefan Koehler
I would go with tweaked XFS (e.g. "nobarrier" as this information is
usually not passed through correctly with VMDKs on VMFS, etc.) if it is
just one single instance in this VM.
Post by Stefan Koehler
Best Regards
Stefan Koehler
Independent Oracle performance consultant and researcher
Website: http://www.soocs.de
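To make Stefan's suggestion concrete, a sketch of what the mount setup might look like (device path and mount point are made up; note also that on newer kernels, 4.19+, the `nobarrier` XFS option has been removed, so check your kernel first):

```shell
# Example /etc/fstab entry for a dedicated Oracle datafile filesystem:
#   /dev/mapper/oravg-u02  /u02  xfs  defaults,noatime,nobarrier  0 0
# After (re)mounting, confirm the options actually in effect:
findmnt -n -o FSTYPE,OPTIONS /u02
```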
Post by Radoulov, Dimitre
Thank you Chris, Matthew and Niall,
so the question is whether ASM is worth it performance-wise.
With the default Oracle database settings, I/O on XFS would be
synchronous, right?
Post by Stefan Koehler
Post by Radoulov, Dimitre
And if I understand MOS Note 1987437.1 correctly, on Linux you cannot
enable async I/O without turning on direct I/O too.
Post by Stefan Koehler
Post by Radoulov, Dimitre
Regards
Dimitre
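One rough way to verify what actually got enabled: the open flags of a process's file descriptors are exposed (in octal) under `/proc/<pid>/fdinfo/`, and on Linux x86-64 the `O_DIRECT` bit is `040000`. The PID and fd below are placeholders, not real values:

```shell
# Inspect how an Oracle background process opened a datafile; look for
# the 040000 (O_DIRECT) bit in the octal "flags:" field.
pid=12345; fd=256   # placeholders: substitute a real dbwriter PID/fd
grep '^flags:' /proc/$pid/fdinfo/$fd
```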
--
http://www.freelists.org/webpage/oracle-l
--
http://www.freelists.org/webpage/oracle-l
--
Andrew W. Kerber
'If at first you dont succeed, dont take up skydiving.'
Mladen Gogala
2018-11-10 16:40:42 UTC
Permalink
SLOB is also very easy to set up.
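For anyone who has not run it, the SLOB workflow is roughly the following (script names are from the SLOB distribution; the tablespace name and session count here are arbitrary examples):

```shell
# 1. Load the SLOB test schemas into a dedicated tablespace,
#    one schema per eventual test session (8 here).
./setup.sh IOPS 8
# 2. Drive physical I/O with 8 concurrent sessions.
./runit.sh 8
# 3. Read the generated AWR report for 'db file sequential read' rates.
```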
--
Mladen Gogala Database Consultant Tel: (347) 321-1217
Ls Cheng
2018-11-11 19:56:57 UTC
Permalink
Hi Radoulov

Just wondering, in your tests did you set FILESYSTEMIO_OPTIONS to SETALL
on XFS?

Thanks

Radoulov, Dimitre
2018-11-11 20:27:32 UTC
Permalink
Yes,
and in our environment it really makes a difference.

Regards
Dimitre
Ls Cheng
2018-11-11 23:47:40 UTC
Permalink
For the better, I guess? I am wondering because I have no experience with
SETALL on XFS; I have only used it on ext3 and ext4, where it works very well.

Thanks
Radoulov, Dimitre
2018-11-12 07:21:35 UTC
Permalink
Yes,
with "setall", IOPS and throughput are significantly higher.

I'm not reporting the numbers because there are too many variables involved
and I have to run some additional tests.

Regards
Dimitre
Michael Brown
2018-11-12 17:28:52 UTC
Permalink
You have to be careful with SETALL on ext3 and ext4: you can have serious issues depending on your kernel version.
--
Michael Brown
***@michael-brown.org <mailto:***@michael-brown.org>
http://blog.michael-brown.org
Rich J
2018-11-12 17:52:55 UTC
Permalink
Post by Michael Brown
You have to be careful with SETALL in ext3 and ext4, you can have serious issues based on your kernel version.
--
Michael Brown
Are you talking about the corruption listed in MOS 1487957.1? If so,
that's on kernel 2.6 and was patched 8+ years ago. Good to know, but
that context is important here.

Thanks,
Rich
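Rich's point about kernel vintage is simple to check on any box before worrying about the old bug. A quick sanity sketch:

```shell
# Print the running kernel and flag anything still on the affected 2.6
# series (the corruption in MOS 1487957.1 was fixed in 2.6-era updates;
# anything on 3.x or later is well past it).
kver=$(uname -r)
echo "running kernel $kver"
if [ "${kver%%.*}" -lt 3 ]; then
  echo "WARNING: 2.6-series kernel - check MOS 1487957.1 before using SETALL"
fi
```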
n***@gmail.com
2018-11-12 17:54:24 UTC
Permalink
Could you give an example? I don't think I've seen such a thing on any
halfway current Linux kernel.
Post by Michael Brown
You have to be careful with SETALL in ext3 and ext4, you can have serious
issues based on your kernel version.
--
Michael Brown
http://blog.michael-brown.org
For good, I guess? I am wondering because I have no experience with SETALL
on XFS; I have only used it on ext3 and ext4, where SETALL works very well.
Thanks
Post by Radoulov, Dimitre
Yes,
and in our environment it really makes a difference.
Regards
Dimitre
Post by Ls Cheng
Hi Radoulov
Just wondering, in your tests did you set FILESYSTEMIO_OPTIONS to SETALL on XFS?
Thanks
Post by Radoulov, Dimitre
Hello all,
after a few quick tests on XFS and ASM (calibrate_io and swingbench) I
see that direct and asynchronous I/O definitely make a difference.
Stefan and Neil, thank you for your suggestions!
Regards
Dimitre
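[For anyone repeating Dimitre's test, a minimal sketch of switching a filesystem-based instance to SETALL and checking the result. This is a config fragment only, assuming a generic single instance you can bounce; verify against your own environment before running:]

```shell
# Sketch only -- assumes a single instance on a filesystem that you can restart.
sqlplus -s / as sysdba <<'EOF'
-- SETALL = asynchronous + direct I/O for filesystem datafiles
ALTER SYSTEM SET filesystemio_options = 'SETALL' SCOPE = SPFILE;
SHUTDOWN IMMEDIATE
STARTUP
-- verify: ASYNCH_IO should report ASYNC_ON for the datafiles
SELECT file_no, filetype_name, asynch_io
  FROM v$iostat_file
 WHERE filetype_name = 'Data File';
EOF
```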
Radoulov,
The caching in the SGA understands your data usage patterns through the
LRU algorithms and will have cached all of the best data. The FS cache, if
you dump it out, will look a lot more like white noise with few discernible
patterns. The SAN cache even more so. The more single-block reads you have,
the more like white noise it all looks. The likelihood of there being a
cache hit in the FS or SAN cache is relatively low. The advantage of direct
path reads significantly outweighs the advantage of both of those caches.
It is worth noting that on most SAN caches, if you specify that the LUN
is for a database, read-ahead to pre-populate the cache is disabled, as
the array understands that it is not the best use of the cache (the general rule
is that SAN cache should be reserved exclusively for writes when the SAN is
used for the database).
Note that these statements are generalisations, and there may be
cases where your assertion is true, but they will be edge cases and I
would recommend that you have a provable scenario to justify running in
that configuration.
Neil Chandler
Database Guy.
------------------------------
*Sent:* 31 October 2018 07:20
*To:* Andrew Kerber
*Subject:* Re: Storage choice for Oracle database on VMware
Thank you all for the valuable input!
Post by Stefan Koehler
what is the problem with direct I/O? You should never run an Oracle
database through page cache anyway :)
I'm not sure that direct I/O is always the best choice. I think that
certain workloads may benefit from the FS cache.
Anyway, I'm wondering why SETALL is still not the default value for
filesystemio_options on Linux (most probably because of the bugs with
certain filesystems and kernel versions).
Regards
Dimitre
Most places with growing databases and heavy duty environments on
vmware use ASM. Some use XFS or similar and LVM, though I am not fond of
those.
ASM is great when you plan correctly; if you don't, it's very painful.
E.g. if you have different-sized disks, ASM will be forever rebalancing, and
failing as there is not enough space on the odd disk. So you need to vacate
the diskgroup to rebuild it. (Yes, you know... not my fault, the previous
consultant did it...) If there's an ASM bug you may have to take an outage
on ASM to apply the patch.
Normal disk operations like dd against ASM devices are almost impossible. Finding
a corrupted data block on an ASM disk takes deep ASM expertise
from a very good Oracle Support engineer.
Those were some of my worst ASM nightmares. It was only 2 years ago.
I have since moved on...
Cheers,
Leng
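[Leng's mixed-disk-size point comes down to simple arithmetic: ASM stripes extents evenly across all disks in a group, so usable capacity is roughly bounded by the smallest disk times the number of disks. A toy sketch in plain shell (no ASM involved; the sizes are made up):]

```shell
# Toy arithmetic only -- no ASM involved; disk sizes are hypothetical.
# With even striping, the group is effectively full once the smallest disk fills.
disks="100 100 100 50"          # GB per disk, one odd smaller disk
n=0; raw=0; min=999999
for d in $disks; do
  n=$((n + 1))
  raw=$((raw + d))
  [ "$d" -lt "$min" ] && min=$d
done
usable=$((min * n))             # even-striping bound
echo "raw=${raw}G usable~=${usable}G stranded~=$((raw - usable))G"
# → raw=350G usable~=200G stranded~=150G
```

About 150G of raw space is stranded by the one odd 50G disk, which is why the rebalance keeps failing until the group is rebuilt with uniform disks.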
Post by Stefan Koehler
Hello Dimitre,
what is the problem with direct I/O? You should never run an Oracle
database through page cache anyway :)
Post by Stefan Koehler
I would go with tweaked XFS (e.g. "nobarrier" as this information is
usually not passed through correctly with VMDKs on VMFS, etc.) if it is
just one single instance in this VM.
Post by Stefan Koehler
Best Regards
Stefan Koehler
Independent Oracle performance consultant and researcher
Website: http://www.soocs.de
Post by Radoulov, Dimitre
Thank you Chris, Matthew and Niall,
so the question is whether, performance-wise, ASM is worth it.
With the default Oracle database settings the I/O on XFS would be
synchronous, right?
Post by Stefan Koehler
Post by Radoulov, Dimitre
And if I understand correctly Note 1987437.1, on Linux you cannot
enable async I/O without turning on direct I/O too.
Post by Stefan Koehler
Post by Radoulov, Dimitre
Regards
Dimitre
--
http://www.freelists.org/webpage/oracle-l
--
http://www.freelists.org/webpage/oracle-l
--
Andrew W. Kerber
'If at first you dont succeed, dont take up skydiving.'
Mladen Gogala
2018-11-14 05:03:21 UTC
Permalink
Believe it or not, I ran into Oracle 9i on RH 4.x less than a year
ago. I wouldn't qualify those as even "halfway current"; I would qualify
them as Smithsonian exhibits and immediately call Ben Stiller.

Regards
Post by n***@gmail.com
Could you give an example? I don't think I've seen such a thing on any
halfway current Linux kernel.
--
Mladen Gogala
Database Consultant
Tel: (347) 321-1217

--
http://www.freelists.org/webpage/oracle-l
Mladen Gogala
2018-11-14 05:01:00 UTC
Permalink
Agreed. I recommend XFS, which is the default for RH 7.x.

Regards
Post by Michael Brown
You have to be careful with SETALL in ext3 and ext4, you can have
serious issues based on your kernel version.
--
Michael Brown
http://blog.michael-brown.org
I am Mladen Gogala and I approved of this message.
--
Mladen Gogala
Database Consultant
Tel: (347) 321-1217
Mladen Gogala
2018-11-01 14:50:21 UTC
Permalink
Why are you not fond of XFS and/or LVM? What would be the great
advantage of ASM? Do you have 2 GB of RAM to spare? I am trying to avoid ASM
whenever I can. And so is Oracle: new ODA and Exadata boxes come with
Oracle data files on the ACFS file system. Personally, I think that ASM is a
pain in the neck or lower and shouldn't be used unless the database is RAC.

Regards
Post by Andrew Kerber
Most places with growing databases and heavy duty environments on
vmware use ASM.  Some use XFS or similar and LVM, though I am not fond
of those.
ASM is great when you plan correctly; if you don't, it's very
painful. E.g. if you have different-sized disks, ASM will be forever
rebalancing, and failing as there is not enough space on the odd
disk. So you need to vacate the diskgroup to rebuild it. (Yes, you
know... not my fault, the previous consultant did it...) If
there's an ASM bug you may have to take an outage on ASM to
apply the patch.
Normal disk operations like dd against ASM devices are almost impossible. Finding
a corrupted data block on an ASM disk takes deep ASM
expertise from a very good Oracle Support engineer.
Those were some of my worst ASM nightmares. It was only 2 years
ago. I have since moved on...
Cheers,
Leng
Post by Stefan Koehler
Hello Dimitre,
what is the problem with direct I/O? You should never run an
Oracle database through page cache anyway :)
Post by Stefan Koehler
I would go with tweaked XFS (e.g. "nobarrier" as this
information is usually not passed through correctly with VMDKs on
VMFS, etc.) if it is just one single instance in this VM.
Post by Stefan Koehler
Best Regards
Stefan Koehler
Independent Oracle performance consultant and researcher
Website: http://www.soocs.de
Post by Radoulov, Dimitre
Thank you Chris, Matthew and Niall,
so the question is whether, performance-wise, ASM is worth it.
With the default Oracle database settings the I/O on XFS would
be synchronous, right?
Post by Stefan Koehler
Post by Radoulov, Dimitre
And if I understand correctly Note 1987437.1, on Linux you
cannot enable async I/O without turning on direct I/O too.
Post by Stefan Koehler
Post by Radoulov, Dimitre
Regards
Dimitre
--
http://www.freelists.org/webpage/oracle-l
--
http://www.freelists.org/webpage/oracle-l
--
Andrew W. Kerber
'If at first you dont succeed, dont take up skydiving.'
--
Mladen Gogala
Database Consultant
Tel: (347) 321-1217
Mladen Gogala
2018-10-31 22:50:40 UTC
Permalink
The nobarrier option for XFS is deprecated:

https://patchwork.kernel.org/patch/10487561/
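[Given that deprecation, a current /etc/fstab entry for an XFS datafile filesystem would simply leave nobarrier out. A sketch; the device and mount point are placeholders, not a recommendation for any specific system:]

```shell
# /etc/fstab sketch -- device, mount point and options are illustrative only.
# nobarrier became a deprecated no-op, so it is omitted entirely:
/dev/mapper/datavg-oradata  /u02/oradata  xfs  defaults,noatime  0 0
```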
Post by Stefan Koehler
I would go with tweaked XFS (e.g. "nobarrier" as this information is usually not passed through correctl
--
Mladen Gogala
Database Consultant
Tel: (347) 321-1217

--
http://www.freelists.org/webpage/oracle-l
Neil Chandler
2018-10-30 18:20:11 UTC
Permalink
Personally I prefer ASM with OMF even on single instance, as you get restart and other nice commands through srvctl, like adding services.
 
If I was using a file system, I'd pick XFS over ext4.

Neil.
sent from my phone
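[With Oracle Restart (the single-instance Grid Infrastructure setup Neil refers to), those srvctl commands look roughly like this. A sketch only; the database name, Oracle home path and service name are hypothetical:]

```shell
# Sketch -- assumes Oracle Restart is installed; all names/paths are placeholders.
srvctl add database -db orcl -oraclehome /u01/app/oracle/product/12.2.0/dbhome_1
srvctl start database -db orcl
srvctl add service -db orcl -service rpt_svc   # register a service for client connections
srvctl start service -db orcl -service rpt_svc
srvctl status database -db orcl
```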

On 30 Oct 2018, at 18:08, Chris Taylor <***@gmail.com<mailto:***@gmail.com>> wrote:

After dealing with multiple filesystems of very busy databases for the last year (one of which is 70+TB), I would say ASM ALL THE WAY. Never go back to filesystems after using ASM.

Adding space, managing space is so much better on ASM.

Just my $0.02.

Chris

On Tue, Oct 30, 2018 at 12:15 PM Radoulov, Dimitre <***@gmail.com<mailto:***@gmail.com>> wrote:
Hi all,

considering the technology today: XFS as default FS on Linux 7, the modern HW and the latest VMware versions, what storage type would you use for a single database instance on VMware?
Oracle versions are from 11.2 to 12.2, database size varies from 300G to 3T+.

Would you use XFS with non-default values for filesystemio_options or ASM?

Regards
Dimitre