
BLU Discuss list archive



enabling IDE DMA



	Good to see you at the InstallFest this last Saturday.  I like
	to go since I always seem to learn something, just by hanging
	around.

	As I was leaving I happened to overhear you telling someone
	that you always recommend, as part of a Linux installation,
	that the installer enable DMA for the IDE drives by rebuilding
	the kernel.  I was curious whether there might be an easier
	way, since most users probably don't want to muck around with
	the kernel.

	I was also curious to see if my systems were using DMA with
	the IDE disks, since I've almost always found computers to be
	IO bound for my applications.

	It looks like kernels can be configured to enable DMA for
	hardware that supports it by setting the CONFIG_IDEDMA_AUTO
	macro during configuration.  Apparently, most distributions
	don't turn this feature on for their stock kernels.  At least,
	the three I checked (Mandrake 7.2, RedHat 7.0, SuSE 7.1)
	didn't have it turned on.
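
	If you want to check your own distribution's stock kernel, the
	configuration it was built with is often installed somewhere
	you can grep.  The paths below are just the usual suspects and
	will vary by distribution:

		grep CONFIG_IDEDMA_AUTO /boot/config-`uname -r`
		grep CONFIG_IDEDMA_AUTO /usr/src/linux/.config

	If the option is on you'll see "CONFIG_IDEDMA_AUTO=y";
	otherwise you'll see a line saying it is not set.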
	
	I've found two different techniques that can be used to enable
	DMA for the disks, with the 2.2 and 2.4 kernels, when the
	kernel hasn't been configured to enable it automatically.

	One is to include the boot prompt argument "idex=dma", where

	 * "idex=" is recognized for all "x" from "0" to "3", such as "ide1".

	This is functionally equivalent to defining CONFIG_IDEDMA_AUTO
	during kernel configuration.  The 'hwif->autodma' flag in the
	IDE driver gets set, and queries of the drive and controller
	chip set determine whether DMA is used with the drive.
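
	If you use LILO you can make the argument permanent by adding
	it to an append line in /etc/lilo.conf and re-running
	/sbin/lilo.  A sketch, with the image and label names being
	whatever your lilo.conf already uses:

		image=/boot/vmlinuz
			label=linux
			append="ide0=dma ide1=dma"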

	A better way is to use hdparm.  It can enable DMA while the
	system is running, and so can be added to a start-up script,
	e.g., rc.local.  The command would be something like

	      hdparm -d1 /dev/hda

	However DMA gets enabled, the kernel will turn it back off if
	excessive errors occur while accessing the drive.
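
	In rc.local the whole thing might look like this; the device
	names are just examples, and the path to hdparm and to the
	script itself vary by distribution:

		# enable DMA on the IDE disks at boot time
		/sbin/hdparm -d1 /dev/hda
		/sbin/hdparm -d1 /dev/hdc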

	You can determine if DMA is currently enabled for a particular
	drive by executing a command like:

		cat /proc/ide/hda/settings

	and looking for the 'using_dma' line.
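
	If you just want that one line, grep will pull it out, and
	'hdparm -d' with no value after the -d reports the same flag:

		grep using_dma /proc/ide/hda/settings
		hdparm -d /dev/hda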

	The computer I have at home doesn't work in DMA mode at all.
	The IDE driver kept turning DMA off.  This computer is not a
	newly manufactured system, but has relatively standard
	components (Intel Celeron, 440LX chipset, and an Intel IDE
	controller).  A query of /proc/pci revealed that the IDE
	controller billed itself as "Master Capable", and 'hdparm -i'
	indicated that the disk drive prefers UDMA mode 4.  I noticed
	that I had a standard 40-conductor IDE cable installed, and
	ran out to buy an 80-conductor UDMA cable.  It didn't help:
	when I enabled DMA, the disk chattered for a few seconds, then
	went silent.  The file /var/log/messages had lines in it like:


	kernel: hda: dma_intr: status=0x51 { DriveReady SeekComplete Error } 
	kernel: hda: dma_intr: error=0x84 { DriveStatusError BadCRC } 
	kernel: hda: DMA disabled 
	kernel: ide0: reset: success 
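
	If you want to see whether the same thing has been happening
	on your machine, a grep through the log should turn it up
	(this assumes your syslog puts kernel messages in
	/var/log/messages, as mine does):

		grep -i "dma disabled" /var/log/messages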


	It looked like there were changes to the IDE driver since the
	Linux version I was running was released.  There were new
	sections in the latest drivers that dealt with CPUs lacking
	cache snooping, so I upgraded my computer to use the 2.2.15
	kernel.  It still didn't work.  Perhaps there are enough
	machines out there that don't work with Linux's IDE driver in
	DMA mode that distribution companies leave DMA disabled by
	default to avoid dealing with calls from users about funny
	messages in system logs.

	But newer systems at the office responded quite well.  The
	'hdparm' program includes a performance test for the disk:

		 hdparm -t -T /dev/hda

	Here are the results:


	==============================================================

	[root@nimue aoi]# /sbin/hdparm -t -T /dev/hda

	/dev/hda:
	 Timing buffer-cache reads:   128 MB in  0.69 seconds =185.51 MB/sec
	 Timing buffered disk reads:  64 MB in 18.94 seconds =  3.38 MB/sec

	[root@nimue aoi]# /sbin/hdparm -d1 /dev/hda

	/dev/hda:
	 setting using_dma to 1 (on)
	 using_dma    =  1 (on)

	[root@nimue aoi]# /sbin/hdparm -t -T /dev/hda

	/dev/hda:
	 Timing buffer-cache reads:   128 MB in  0.95 seconds =134.74 MB/sec
	 Timing buffered disk reads:  64 MB in  3.15 seconds = 20.32 MB/sec

	==============================================================

	[root@raijin glenn]# hdparm -t -T /dev/hda

	/dev/hda:
	 Timing buffer-cache reads:   128 MB in  4.86 seconds = 26.34 MB/sec
	 Timing buffered disk reads:  64 MB in 23.27 seconds =  2.75 MB/sec

	[root@raijin glenn]# hdparm -d1 /dev/hda

	/dev/hda:
	 setting using_dma to 1 (on)
	 using_dma    =  1 (on)

	[root@raijin glenn]# hdparm -t -T /dev/hda

	/dev/hda:
	 Timing buffer-cache reads:   128 MB in  3.71 seconds = 34.50 MB/sec
	 Timing buffered disk reads:  64 MB in  6.45 seconds =  9.92 MB/sec

	==============================================================
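
	In round numbers, turning DMA on took buffered disk reads from
	about 3.4 to about 20.3 MB/sec on the first machine (roughly a
	factor of six) and from about 2.8 to about 9.9 MB/sec on the
	second (roughly a factor of 3.6).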

	But now this enlightenment is going to cost me.  I'll need to
	put $300 into my home system to bring it up to snuff.







