Guide to converting FAT32 => NTFS?

It seems this method doesn't work for external hard drives.
Does anyone know a way to convert them?
 
I used cmd to convert drives D and E, but when converting drive C it keeps asking me to choose Y/N or something, and I don't dare press anything blindly. Anyone who has converted drive C, please show me how.
 
Just read the instructions it gives and follow them.
I think you just press Y and keep going.
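
For reference, a minimal sketch of the cmd route (the drive letters here are just examples). The Y/N prompt on the system drive is just convert asking whether to force a dismount and, since it can't get exclusive access to C: while Windows is running, whether to schedule the conversion for the next restart:

    convert D: /FS:NTFS
    convert E: /FS:NTFS
    convert C: /FS:NTFS
    rem on C:, answer Y to schedule the conversion, then reboot and let it run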
 
After converting drive C, my machine got noticeably slower :|

How do you convert back without losing data?
 
Huh, my machine runs just fine after converting. How slow is slow, exactly?
 
Geez, who said you don't need to defrag :| ? Even after switching to NTFS it still fragments like crazy :|
 
Who still boots into DOS these days? A Ghost CD and you're done; for something lighter, pull out Hiren's Boot and you're fine. You should keep NTFS rather than FAT, and you shouldn't compress the drive either ::)

And what if the CD gets damaged? :'>

If you know what you're doing, integrate both an auto-Ghost menu and Hiren's Boot into the boot menu; very convenient :x

DOS still exists, and it's very useful if you understand how to use it :>
 
It still fragments, but the point is the speed isn't affected :>

DrNil/The myth of defragmentation said:
I've been meaning to write something about this topic, and since I replied on another forum (pleasuredome) to the unending question of the benefits of defragmentation regarding NTFS volumes, I thought it was time to post here what I think about the topic. What I think is backed up by testing and by modern technology; it isn't just something I simply pulled out of an invisible hat.

Well, the point is that defragmenting huge, heavily out-of-shape file servers *will* make a difference; however, for most users (workstations), defragmenting NTFS volumes will only show results (and negligible ones at that) in benchmarks, not in real-life usage.

Fragmentation makes a negligible difference to the majority of file accesses, as most file accesses aren't split across a fragment boundary, even if the file is fragmented, and if you're accessing a contiguous part of a file, it doesn't matter how many fragments the rest of the file is in. Watching PerfMon for "split IOs" on your disk will show this. You won't find very many, at least not as a proportion of the total IOs on the disk. Even though a file may be "fragmented", most of the fragments would have to be smaller than 64KBytes (NT's default transfer size) for it to matter.

The fact is that putting frequently-accessed files in LCNs that are numerically "near" each other WILL put those files physically near to each other on the disk. And putting them into low-numbered LCNs will put them into outer cylinders (hence faster sequential transfer rates) than higher-numbered LCNs.

First, increasing LCNs map to increasing sector addresses, and second, physical block numbers break down into three pieces: Cylinder number (seek position, 0 at the outer edge), head number (aka track or surface number), and sector within track. Now here is the critical point: I've listed those in sequence from "most significant" to "least significant". Meaning that if you start reading a disk partition at LCN 0 and work up, you'll find that it reads all the sectors on surface 0, track 0... then surface 1... then surface 2... and doesn't change tracks until all sectors on all surfaces of the first track have been read.

The reason they do it this way -- and the reason I can be so confident that nobody maps LCNs to physical sector addresses any other way -- is simple: Switching heads is faster than changing cylinders. For obvious reasons: Switching heads is electrical, while changing cylinders is mechanical. You want to do as little as possible of the slowest things. So the cylinders change "most slowly" as the LCN goes up.

While it's true that (due to zone bit mapping) we don't know exactly where the track boundaries are, we CAN guarantee that lower numbered LCNs are going to be "outward" of (or, at worst, in the same cylinder as) higher numbered LCNs.

Ergo: If you want to optimize file placement for sequential transfer rates... not that that's necessarily the right thing to do!... you want to put the files in the lowest LCNs available. It doesn't matter how many platters there are, nor how many sectors there are per track, nor whether it's the first, last, or "other" partition in the disk. Lower LCNs are always in outer cylinders.

What about placing files "near" each other? Well, it's true that we can't guarantee that two sequential LCNs are in fact adjacent on the disk, nor know when the track or cylinder boundaries are... at least not from the file system level. Two sequential LCNs might be on different surfaces, or even on adjacent tracks. But we CAN guarantee that they'll NEVER be farther apart than that. And given that there are not that many tracks, and quite a huge number of LCNs, the number of pairs of sequential LCNs that actually are split across track boundaries is a vanishingly small fraction of the total number.

To conclude: the pros are a negligible gain on a benchmark test; the cons? Well, why waste your time?
Bonus

DrNil/Virtual Memory and the Myth of Fragmentation said:
On Virtual Memory

Virtual memory was once an "overflow" space for main RAM. Back then, we had between 2MB and 16MB (depending on both platform and processor) and the MMU was maybe a separate chip. I've got a 68851 knocking about somewhere.

An MMU is a bit of magic. It can remap addresses to absolutely anywhere. So if I have a patch of RAM at some forsaken longword hex address, I can use the MMU to throw that RAM out onto disk and, to the process which owns the RAM, not a thing has changed. But I now have some free RAM I can use.
That's swapping. You shouldn't really do it unless you want to do stuff that overflows your memory capacity. Bear in mind here that swapping came in on the desktop when the best CPU was about 33MHz. $200 might get you a 33MHz 68RC030 with built in MMU. RAM density was low and price was high. We were years away from getting 1MB for less than $1. Heck, we were barely getting 1MB for $40. Hard disks hadn't even hit 1MB/$1. CPUs were slow. To do a batch operation on ten 1024x768 images was a huge undertaking. To render a scene in Lightwave or 3DStudio was to leave your machine hard at work for days. Performance wasn't the issue. RAM capacity was. Swapping got us more RAM capacity and It Was Good. Sort of.

But swapping is wasteful, inefficient and tends to fragment memory (yes, this was a problem) as well as being very difficult to keep track of on a heavily multitasking system. So the OS had to defragment memory (pretty simple, just swap from one place in RAM to another) as well. The overhead built up pretty high. One of the reasons why Chicago was so damn unstable. When swapping, it's much better to have a stack of RAM and not have to swap at all.

The other option is paging. Rather than use the HD (often an entire partition or even an entire drive) as overflow space and stick chunks of RAM on it, you'd take that HD space and actually map it into the CPU's virtual addressing.
Remember how I said that the MMU can throw bits of RAM about? Well, this also applies to any address anywhere in the system as long as the CPU finds it addressable as memory. So we can move pages around rather than just chunks of RAM (a page may be a chunk of RAM, but it also may not be) to and from the HD. We can move those pages or assign them anywhere. Run a program and the PE (exe, dll, etc.) files it uses are actually in the CPU's address space under Win32. Physical RAM is in the CPU's address space. A portion of the HD (the page file) is in the CPU's address space. This is virtual addressing and how paging works. It isn't swapping. The OS can keep an eye on what page is where, how often it's used, what process it belongs to and move it to wherever's most appropriate. Heck, virtual memory can be allocated by a process without that memory existing anywhere at all. As soon as it's used it springs into existence. Unsure if NT does this or not, though.
Swapping either can't do these things or doesn't do them very well. Paging is not "overflow" RAM. You may well have several hundred MB of used page file with a gigabyte of free RAM. There are things in the page file which simply don't belong in RAM anyway. Things that should be addressable, but don't have to be in fast and at-a-premium main memory.

This is most of the reason why NT blows 95, 98 and ME clean out of the water when running applications. It just handles memory better by design. In order to do this, it has to have the full paging model. This includes a substantial page file.

Crapping on the page file just takes us back fifteen years and forces the OS to deal with only physical RAM. If you don't know or aren't sure, let the OS handle it. It knows what it's doing better than you. The default settings for 2k and XP are, like most other inner-working defaults, the best for 95% of everybody. The other 5% knows exactly what they're doing and are probably hacking the OS to bits anyway.

The Myth of Defragmentation

All file IO, unless it bypasses the file cache, is likewise done in chunks never larger than 192 kbytes and rarely larger than 64 kbytes. In between those chunks, your system is almost always moving the heads to access some other files. This is why it takes fairly unusual circumstances to see actual performance impacts from fragmentation. And in fact, the more multitasking you're doing, the less effect fragmentation has.

It's easy to run tests (as all of the defragger vendors have done) that show otherwise, but these are invariably done with a single-threaded workload that's accessing just one file sequentially (that is, from end to end). That isn't how most of us use our machines any more.

A benefit you often DO get after a defrag run is that all the free space on the volume is contiguous, or much more so than it was before... resulting in all of the files being closer together than they were before. This cuts down on seek time among the parts of the various files. The fact that the files are contiguous isn't really what made things go faster, but it looks that way, and besides, it's a lot easier to explain and sell "defragmentation" than "file placement optimization".

So, the argument is that, except in extreme cases, fragmentation does not result in significantly more frequent seeks, nor significantly longer seeks.

The former can be measured by looking at the split IO rate as a fraction of the total IO rate to a given physical disk. If it doesn't get lower after defragmenting, defragmenting didn't help.

The latter can be measured by things like the IO request rate, the depth of the request queue to the disk, and the time per IO request (which is not just the inverse of the request rate), under high load periods. If these don't get better after defragmenting, defragmenting didn't help.

IMO defragmenting is overrated, and many users tend to think that defragmenting their drives will make their systems faster, when in reality this would only be true in extreme cases (particularly if you are running a server). In other instances the difference will be so negligible that the "myth" of defragmenting becomes just a placebo effect. Cheers.
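
If you want to check the split IO numbers he mentions on your own machine, one way (assuming XP or later, where typeperf is available) is to watch the standard PhysicalDisk counters from a command prompt; the interval and sample count below are just examples:

    typeperf "\PhysicalDisk(_Total)\Split IO/sec" "\PhysicalDisk(_Total)\Disk Transfers/sec" "\PhysicalDisk(_Total)\Avg. Disk sec/Transfer" "\PhysicalDisk(_Total)\Current Disk Queue Length" -si 5 -sc 60

Compare Split IO/sec against Disk Transfers/sec before and after a defrag run under your normal workload; if the ratio doesn't drop, the defrag didn't buy you anything.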
 
So can you convert back, NTFS => FAT32?
Yes.
I'd recommend keeping drive C as FAT32 so that when something breaks it's easy to fix from DOS (NTFS is fine too, but DOS has to support it; XP's DOS still doesn't, so you need an extra boot disk). The other drives can be NTFS, no problem.
 
Yes.
I'd recommend keeping drive C as FAT32 so that when something breaks it's easy to fix from DOS (NTFS is fine too, but DOS has to support it; XP's DOS still doesn't, so you need an extra boot disk). The other drives can be NTFS, no problem.

That only applies when you're using Windows XP or earlier; Vista must be installed on an NTFS partition.
 
Could you give me detailed instructions? :D

If drive C is in NTFS format, use Partition Magic on the Hiren's disc to reformat it as FAT32, then go into Hiren's DOS and type format x: /s /q (where x is the letter DOS assigns to the C partition after formatting). Then install Windows as usual (without reformatting). Once everything is installed, copy the Ghost.uha file from the Hiren's disc into the root folder of the boot drive. To create or restore a Ghost image, keep pressing F8 after turning the PC on, choose the boot menu, pick line 2, and when the DOS screen appears type GHOST to launch the Ghost program.
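
Roughly, the DOS-side part of that looks like the sketch below (the drive letter, the Hiren's menu layout, and where the Ghost executable ends up vary from disc to disc, so treat this as an outline rather than exact steps):

    format C: /s /q     rem quick-format as FAT32 and copy the DOS system files so the partition boots
    ghost               rem once the Ghost program is available, use Local > Partition > To Image to create an image, or From Image to restore one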

That only applies when you're using Windows XP or earlier; Vista must be installed on an NTFS partition.

NTFS is fine too, but DOS has to support it
 
It seems this method doesn't work for external hard drives.
Does anyone know a way to convert them?

Does anyone know how to convert an external hard drive from FAT32 to NTFS without formatting? ::)
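
For what it's worth, the same convert syntax from earlier in the thread would look like this, assuming the external drive shows up as, say, G: (whether it actually works on a given external enclosure I can't confirm, per the post above):

    convert G: /FS:NTFS
    rem add /X to force a dismount if the volume is in use: convert G: /FS:NTFS /X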
 