
directio, dsync & sync, async IO


I came across a very good conceptual article about Sybase device I/O.

If you have any questions or suggestions, please let me know. Thanks.

Source: Sybase blogs, sybooks, and a Sybase white paper on direct I/O.


  • Asynchronous I/O allows the process issuing the I/O to continue executing while the OS completes the request.
  • The dataserver process does not block on the request until its completion.


  • Synchronous I/O blocks the issuing process from continuing to execute while the OS completes the request.
  • The process is blocked until the request is completed, which generally results in poor throughput.


  • In 12.0 Sybase introduced the dsync flag – shorthand for “data synchronous.”
  • When the dsync setting is on, Adaptive Server opens a database device file using the UNIX dsync flag.
  • The dsync flag in ASE translates directly to the O_DSYNC open(2) flag. That is to say, when ASE opens a device that has the dsync flag set, ASE passes O_DSYNC to open(2).
  • This flag tells the file system that a write to that file must pass through the cache and be written to disk before the write is considered complete.
  • In other words, for writes we give up the cache efficiency and make sure the data goes to disk.
  • This way, if the system crashes, everything that we thought had been written to disk has in fact been written to disk.

Is dsync “synchronous”?

  • No, it is not.
  • This synchronous / asynchronous distinction is at a different level. With async I/O we are talking about the context in which the I/O is executed, i.e. whether the I/O blocks the caller or is completed in a different context.
  • With dsync we are talking about when the write() is considered complete.
  • These are not mutually exclusive: you can asynchronously issue a data-synchronous I/O.
  • The async portion works as always: the application issues an I/O and later polls (or is notified) for completion. The dsync portion means that the application won’t be told that the I/O has completed until the data has made it to disk.


  • Direct I/O is another kind of file system I/O, introduced in ASE 15.
  • In the direct I/O model the file system cache is completely bypassed.
  • With direct I/O, writes are inherently safe because they bypass the cache and go straight to disk.
  • Direct I/O is very, very similar to raw I/O.
  • The main difference is that with direct I/O the structure of a file system still exists, which eases manageability.
  • With raw I/O, no file system exists at all. The performance of raw and direct I/O should be very similar.

I would also like to add:

  • The dsync and directio options are mutually exclusive; both cannot be turned on at the same time for a device. Both dsync and directio provide full recoverability.
  • If you have gone through the full article, the next question is: which would perform best, a raw device or a file system device with direct I/O, given that both bypass the file system cache?
Check the Sybase Wiki @ sybasewiki.com
  1. Sa
    January 13th, 2011 at 12:57 | #1

    The DBA is using directio for all the devices except the tempdb file. They have set dsync and directio off for the tempdb device. They have also created a 9GB named cache, tempdb_cache. We have a query that fills the tempdb cache and then causes the OS CPU load to go over 2000, resulting in an unresponsive OS.

    Should the tempdb device have dsync and directio off with a large named tempdb cache? If we turn on directio for the tempdb device, the CPU load does not occur.

    • Simon
      January 19th, 2011 at 06:55 | #2

      Interesting sa, we are investigating turning dsync and directio off for tempdb at the moment.

      Just on a tangent, if I/O is causing CPU load like that, are you sure your disks are configured correctly? I/O generally should not be CPU-bound, except in very old modes… perhaps your disks have errors and are falling back to a slower/more reliable/CPU-bound mode.

      Please keep me posted here, as I’d be interested in whether directio solves your issue without degrading performance too much.


      • January 19th, 2011 at 21:11 | #3

        Our storage controller is idle during the issue, so it’s not a physical issue…

        We are narrowing the issue down to what appears to be a “disk i/o structures” ASE configuration issue. Sysmon is reporting that I/O is being delayed due to the number of disk i/o structures and recommends increasing this value.

        The DBA set the “disk i/o structures” setting to 8K and “number of devices” to 128 from the defaults. Since we have dsync and directio off, I believe we are running into an issue where too many asynchronous I/O threads are created, causing CPU overload, and because ASE limits the number of structures, I/O is backing up inside of ASE.

        If we enable directio for the tempdb device, we do not have the CPU overload issue and no performance issue. This is because with directio the OS cache is bypassed, ASE doesn’t use the structures, and ASE waits for the I/O to complete.

        I believe that we simply have an ASE 15 tuning issue where the wrong configuration can cause major problems in some situations.

        We are also using a large 10GB tempdb_cache, and we have 64GB of memory free on the server, so my question is: why wouldn’t we just increase tempdb_cache to prevent any physical I/O to the 16GB tempdb device?
