Cygon Tux's lil' helper
Joined: 05 Feb 2006 Posts: 115 Location: Germany
Posted: Fri Oct 27, 2023 4:43 pm Post subject: Since Kernel+GCC update, MD-RAID refuses to write |
EDIT/UPDATE: I initially thought this was a VirtualBox issue and posted it under "Desktop Environments," but the problem is much broader, so I'm reposting it in "Kernel & Hardware".
Summary: yesterday, I did an `emerge --sync` and `emerge --update` that seems to have conjured up major trouble on my system.
State that worked fine for weeks:
- sys-devel/gcc-13.2.1_p20230826
- sys-kernel/gentoo-sources 6.1.53-r1
State where nothing gets written to the disks of my MD-RAID partitions:
- sys-devel/gcc-13.2.1_p20231014
- sys-kernel/gentoo-sources-6.1.57
I tried to go back to kernel 6.1.53-r1 to narrow the issue down, but that one doesn't even have the 'amd64' or '~amd64' keywords anymore in the current portage tree. Maybe I'll add it back in, manually, but I'm sure the maintainers removed the keywords for a reason...
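(For reference, re-keywording it locally should only need an entry like this, assuming the ebuild itself is still in the tree:)
Code: | # /etc/portage/package.accept_keywords/gentoo-sources
# "**" accepts the package even though its keywords were dropped
=sys-kernel/gentoo-sources-6.1.53-r1 ** |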
I have a SuperMicro motherboard with a "fake RAID" chip (Intel VROC). It's essentially a boot-time-configurable MD-RAID: I have a boot menu where I can create and destroy RAID partitions and when I boot to Windows w/installed Intel VROC driver or to Linux w/installed MD-RAID, it automatically assembles the software RAID.
The problem I'm now having manifests itself like this:
Since the `emerge --update`, nothing is ever written to the drives in this RAID anymore. It mounts, I can read from it and I can even write a few gigabytes to it (which all end up in the disk cache), but not a single byte is ever committed to disk.
If I type `sync` to flush the disk cache, or reach roughly 5 gigabytes of cached writes, write accesses on all partitions suddenly hang indefinitely. I assume the kernel is waiting for some write queue that has gotten too full, but I don't know the exact inner workings. At that point the system is quite unusable: I can't even reboot anymore, I can only press the reset button to bring it back into a usable state.
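For what it's worth, the pile of unflushed data is easy to watch from a shell while a copy runs:
Code: | # watch dirty pages and writeback counters while copying to the RAID
watch -n1 "grep -E '^(Dirty|Writeback):' /proc/meminfo" |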
I've now compiled kernel 6.1.57 with the latest stable GCC (sys-devel/gcc-13.2.1_p20230826) and the issue is still there. Weirdly, the current Gentoo Live USB image runs kernel 6.1.57, also compiled with gcc-13.2.1_p20230826, and it cleanly detects, assembles and writes to the MD-RAID partitions without issue.
I'll now try booting an older kernel that still lingers on my boot partition, but if that doesn't work, I'm at a loss as to what might cause it. Could other packages that arrived with yesterday's `emerge --update` be to blame?
Last edited by Cygon on Sat Oct 28, 2023 4:58 pm; edited 2 times in total |
Cygon Tux's lil' helper
Joined: 05 Feb 2006 Posts: 115 Location: Germany
Posted: Fri Oct 27, 2023 4:58 pm Post subject: |
I'm back on
Code: | # cat /proc/version
Linux version 6.1.46-gentoo (root@tiamat) (gcc (Gentoo 13.2.1_p20230826 p7) 13.2.1 20230826, GNU ld (Gentoo 2.40 p5) 2.40.0) #3 SMP Wed Sep 27 13:32:01 CEST 2023 |
which is an old kernel I compiled a month ago and which I used for weeks without issue.
Annoyingly, the issue persists. So I guess it must be related to one of the other packages I emerged yesterday. The update pulled in an unstable GCC release; perhaps that is to blame?
https://pastebin.com/r711aMAk
I'm writing this in a browser with my system already semi-locked up again, after copying a 2 GiB test file on a RAID partition and running `sync`. `dmesg` shows nothing going on. `/proc/mdstat` shows the RAID array assembled and running without issues.
Any hints on what this could be would be appreciated. I'll now re-emerge all the packages from yesterday's update. |
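To get the exact list of what yesterday's update pulled in, I'll filter the raw emerge log, roughly like this:
Code: | # everything merged in the last 24 hours (emerge.log timestamps are epoch seconds)
awk -F: -v cutoff="$(date -d '1 day ago' +%s)" \
    '$1+0 > cutoff+0 && /completed emerge/' /var/log/emerge.log |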
NeddySeagoon Administrator
Joined: 05 Jul 2003 Posts: 54799 Location: 56N 3W
Posted: Fri Oct 27, 2023 5:17 pm Post subject: |
Cygon,
What does dmesg show after a small test write?
Put the whole thing onto a pastebin please.
What about `cat /proc/mdstat`
and
Code: | $ sudo mdadm -E /dev/sda... | where sda... means one of the block devices holding your raid set.
I'm not sure that it's a thing with fakeraid.
Your raid system can be read only at several levels:
the file system level, the mdadm raid level or even the block device level. _________________ Regards,
NeddySeagoon
Computer users fall into two groups:-
those that do backups
those that have never had a hard drive fail. |
Cygon Tux's lil' helper
Joined: 05 Feb 2006 Posts: 115 Location: Germany
Posted: Fri Oct 27, 2023 6:07 pm Post subject: |
I already checked `/proc/mdstat` and saw nothing different from how it was during all the weeks (years!) it worked:
Code: | # cat /proc/mdstat
Personalities : [raid0] [raid1] [raid10] [raid6] [raid5] [raid4]
md124 : active raid1 sdb[0]
3825205248 blocks super external:/md125/0 [2/1] [U_]
md125 : inactive sdb[0](S)
5201 blocks super external:imsm
md126 : active (auto-read-only) raid1 sdc[1] sdd[0]
3892314112 blocks super external:/md127/0 [2/2] [UU]
md127 : inactive sdc[1](S) sdd[0](S)
10402 blocks super external:imsm
unused devices: <none> | The mdraid in question is md126 w/md127 as a sub-array. Please ignore the degraded md124+md125 array, I disconnected one of its drives in July/August intentionally.
The output of `mdadm --examine` is as follows:
Code: | # mdadm --examine /dev/md127
/dev/md127:
Magic : Intel Raid ISM Cfg Sig.
Version : 1.3.00
Orig Family : 604fee35
Family : 604fee35
Generation : 0000a122
Creation Time : Unknown
Attributes : All supported
UUID : 8bb95e02:351acd75:413d1882:03c163b4
Checksum : 9681c710 correct
MPB Sectors : 1
Disks : 2
RAID Devices : 1
Disk01 Serial : S758NS0W603165B
State : active
Id : 00000003
Usable Size : 7814026766 (3.64 TiB 4.00 TB)
[SSDs]:
Subarray : 0
UUID : 560106ea:70eff5d8:1b71434b:da6be36b
RAID Level : 1
Members : 2
Slots : [UU]
Failed disk : none
This Slot : 1
Sector Size : 512
Array Size : 7784628224 (3.63 TiB 3.99 TB)
Per Dev Size : 7784630272 (3.63 TiB 3.99 TB)
Sector Offset : 0
Num Stripes : 30408704
Chunk Size : 64 KiB
Reserved : 0
Migrate State : idle
Map State : normal
Dirty State : clean
RWH Policy : off
Volume ID : 1
Disk00 Serial : S758NS0W603225F
State : active
Id : 00000001
Usable Size : 7814026766 (3.64 TiB 4.00 TB)
# mdadm --examine /dev/md126
/dev/md126:
MBR Magic : aa55
Partition[0] : 4294967295 sectors at 1 (type ee) | There are three partitions on the software RAID:
Code: | # ls /dev/md/ -lh
total 0
lrwxrwxrwx 1 root root 8 Oct 27 19:03 HDDs_0 -> ../md124
lrwxrwxrwx 1 root root 10 Oct 27 19:03 HDDs_0p1 -> ../md124p1
lrwxrwxrwx 1 root root 10 Oct 27 19:03 HDDs_0p2 -> ../md124p2
lrwxrwxrwx 1 root root 10 Oct 27 19:03 HDDs_0p3 -> ../md124p3
lrwxrwxrwx 1 root root 8 Oct 27 19:03 imsm0 -> ../md127
lrwxrwxrwx 1 root root 8 Oct 27 19:03 imsm1 -> ../md125
lrwxrwxrwx 1 root root 8 Oct 27 19:03 SSDs_0 -> ../md126
lrwxrwxrwx 1 root root 10 Oct 27 19:03 SSDs_0p1 -> ../md126p1
lrwxrwxrwx 1 root root 10 Oct 27 19:03 SSDs_0p2 -> ../md126p2
lrwxrwxrwx 1 root root 10 Oct 27 19:03 SSDs_0p3 -> ../md126p3 | (The SSDs_0p* partitions are the ones in question.)
And straight out of my `/etc/fstab`:
Code: | # RAID partitions
#
/dev/md/HDDs_0p1 /srv/oldgames ntfs auto,noatime,uid=cygon,gid=users,dmask=0002,fmask=0002 0 0
/dev/md/HDDs_0p2 /srv/olddevel ntfs auto,noatime,uid=cygon,gid=users,dmask=0002,fmask=0002 0 0
/dev/md/HDDs_0p3 /srv/oldarchive ntfs auto,noatime,uid=cygon,gid=users,dmask=0002,fmask=0002 0 0
/dev/md/SSDs_0p1 /srv/games ntfs auto,noatime,uid=cygon,gid=users,dmask=0002,fmask=0002 0 0
/dev/md/SSDs_0p2 /srv/devel ntfs auto,noatime,uid=cygon,gid=users,dmask=0002,fmask=0002 0 0
/dev/md/SSDs_0p3 /srv/archive ntfs auto,noatime,uid=cygon,gid=users,dmask=0002,fmask=0002 0 0 |
Unfortunately, there is nothing at all appearing in `dmesg` when the disk writes hang. I'll post this, repeat my test and reply again with the contents of `dmesg`, just so I don't risk losing this entire post now.
Observations I made:
- Booting the Gentoo Live USB image (with kernel 6.1.57 compiled by the exact same GCC version) it assembles the MD-RAIDs and writes reach the disks without any issues.
- Dual-booting into my Windows 10 system with the Intel VROC RAID driver installed (that's Intel's MD-RAID clone for Windows) is also able to write without any issues.
- When I state that it doesn't actually write to disk, I mean that I am monitoring the /dev/sdc and /dev/sdd devices via KSysGuard (aka block device level for the underlying physical drives). When mdadm has to do a resync, for example, I see one drive reading and one drive writing.
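To cross-check Neddy's point about the different levels, these are the read-only flags I can look at from a shell (a sketch; device names as on this box):
Code: | # file system level: look for "ro" among the mount options
findmnt -no OPTIONS /srv/games

# md level: "readonly" or "read-auto" would be suspicious here
cat /sys/block/md126/md/array_state

# block device level: 1 means the kernel flagged the device read-only
blockdev --getro /dev/md126 /dev/sdc /dev/sdd |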
Cygon Tux's lil' helper
Joined: 05 Feb 2006 Posts: 115 Location: Germany
Posted: Fri Oct 27, 2023 6:25 pm Post subject: |
I repeated the test and gave it a few minutes to write or fail or something else (but the `sync` is still stuck even as I'm writing this).
Here's what KSysGuard monitoring my disk cache and both drives shows:
https://imgur.com/a/fSGXnRp
As you can see, the file copy even reads from both drives, but doesn't ever write. It all collects in the disk cache and stays there.
Here's the output of `dmesg`, captured after the write had already been hanging for several minutes. The last line with the `nfs` mount is from shortly after the write started (I mounted a network drive so I would still have somewhere working to save the screenshot).
https://pastebin.com/t4ZePCSG |
NeddySeagoon Administrator
Joined: 05 Jul 2003 Posts: 54799 Location: 56N 3W
Posted: Fri Oct 27, 2023 6:48 pm Post subject: |
Cygon,
Code: | # RAID partitions
#
/dev/md/HDDs_0p1 /srv/oldgames ntfs auto,noatime,uid=cygon,gid=users,dmask=0002,fmask=0002 0 0 |
Which ntfs driver are you using?
That's the old kernel ntfs driver, which does not do what you think it does. It's essentially read only.
Its very broken write capabilities were reduced to writing to existing files only, and then only provided that the file size did not change.
Then there was ntfs-3g, which is a FUSE filesystem.
Most recently, there is a fully functional NTFS driver in the kernel which does not have the overhead of FUSE, so it's faster.
ntfs is the old, mostly read-only driver.
ntfs3 is the fully functional in-kernel ntfs driver.
ntfs-3g is the FUSE ntfs driver.
Check your kernel for ntfs3 support and use ntfs3 for the filesystem type.
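For example, checking the running kernel (assuming it exposes its config) and what such an fstab line could look like, using one of yours as the template:
Code: | # does the running kernel know about the new driver?
zgrep NTFS3 /proc/config.gz

# example fstab entry with the in-kernel driver
/dev/md/SSDs_0p1  /srv/games  ntfs3  noatime,uid=cygon,gid=users,dmask=0002,fmask=0002  0 0 |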
dmesg may show more. _________________ Regards,
NeddySeagoon
Computer users fall into two groups:-
those that do backups
those that have never had a hard drive fail. |
Cygon Tux's lil' helper
Joined: 05 Feb 2006 Posts: 115 Location: Germany
Posted: Fri Oct 27, 2023 7:00 pm Post subject: |
I'm using the fuse version of the NTFS-3G driver.
Until the `emerge --update` yesterday evening, I had been using those NTFS RAID partitions quite thoroughly for development, video editing and games; 99.9% of the data stored on those drives was written through that NTFS implementation.
Kernel config:
Code: | #
# DOS/FAT/EXFAT/NT Filesystems
#
CONFIG_FAT_FS=y
# CONFIG_MSDOS_FS is not set
CONFIG_VFAT_FS=y
CONFIG_FAT_DEFAULT_CODEPAGE=437
CONFIG_FAT_DEFAULT_IOCHARSET="iso8859-1"
# CONFIG_FAT_DEFAULT_UTF8 is not set
CONFIG_EXFAT_FS=y
CONFIG_EXFAT_DEFAULT_IOCHARSET="utf8"
# CONFIG_NTFS_FS is not set
# CONFIG_NTFS3_FS is not set
# end of DOS/FAT/EXFAT/NT Filesystems |
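So NTFS3 isn't in this kernel at all. If I do switch to it, enabling it from the kernel source tree should be roughly this (a sketch, then rebuild/install as usual):
Code: | cd /usr/src/linux
# turn on the in-kernel read-write NTFS driver in .config
scripts/config --enable NTFS3_FS
make olddefconfig |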
NeddySeagoon Administrator
Joined: 05 Jul 2003 Posts: 54799 Location: 56N 3W
Posted: Fri Oct 27, 2023 7:43 pm Post subject: |
Cygon,
Code: | # CONFIG_NTFS_FS is not set
# CONFIG_NTFS3_FS is not set |
Is correct for ntfs-3g, but the fstab mount option is not.
You need Code: | mount -t ntfs-3g ... |
It's possible to have all three drivers installed and pick one at mount time.
Test time.
Unmount one of your ntfs filesystems.
Remount it manually with that option.
What happens now? _________________ Regards,
NeddySeagoon
Computer users fall into two groups:-
those that do backups
those that have never had a hard drive fail. |
Cygon Tux's lil' helper
Joined: 05 Feb 2006 Posts: 115 Location: Germany
Posted: Fri Oct 27, 2023 8:40 pm Post subject: |
I believe `ntfs` has been an alias for `ntfs-3g` nearly since the ancient ntfs kernel module was given up, but just to be sure, here's the test:
Okay, first, working mounts from the Gentoo Live USB image:
Code: | /dev/md124p3 on /run/media/gentoo/Archive type fuseblk (rw,nosuid,nodev,relatime,user_id=0,group_id=0,default_permissions,allow_other,blksize=4096,uhelper=udisks2)
/dev/md124p2 on /run/media/gentoo/Devel type fuseblk (rw,nosuid,nodev,relatime,user_id=0,group_id=0,default_permissions,allow_other,blksize=4096,uhelper=udisks2)
/dev/md124p1 on /run/media/gentoo/Games type fuseblk (rw,nosuid,nodev,relatime,user_id=0,group_id=0,default_permissions,allow_other,blksize=4096,uhelper=udisks2) |
Now, my mounts as they happened according to the previously posted `/etc/fstab` snippet:
Code: | /dev/md126p3 on /srv/archive type fuseblk (rw,noatime,user_id=0,group_id=0,default_permissions,allow_other,blksize=4096)
/dev/md126p2 on /srv/devel type fuseblk (rw,noatime,user_id=0,group_id=0,default_permissions,allow_other,blksize=4096)
/dev/md126p1 on /srv/games type fuseblk (rw,noatime,user_id=0,group_id=0,default_permissions,allow_other,blksize=4096) |
After remounting one partition with `# umount /srv/games` followed by `# mount -t ntfs-3g /dev/md126p1 /srv/games`:
Code: | /dev/md126p1 on /srv/games type fuseblk (rw,relatime,user_id=0,group_id=0,allow_other,blksize=4096) |
Then I copied a ~3 GiB file in the remounted `/srv/games/` partition and called `sync`. There were no visible writes in KSysGuard, the sync call is still hanging now, after waiting ~10 minutes. No change.
Also nothing printed to dmesg, again. I could try the ntfs3 driver, but I suspect something else is going on.
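One thing that might finally put something into dmesg: asking the kernel to dump its blocked tasks while the `sync` hangs (a sketch, assuming SysRq isn't disabled here):
Code: | # temporarily allow all SysRq functions, then dump tasks stuck in uninterruptible sleep
sysctl -w kernel.sysrq=1
echo w > /proc/sysrq-trigger
dmesg | tail -n 80 |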
Cygon Tux's lil' helper
Joined: 05 Feb 2006 Posts: 115 Location: Germany
Posted: Fri Oct 27, 2023 8:51 pm Post subject: |
I searched through my `.bash_history` and found that a few attempts to set up a "write-intent bitmap" for my RAID array intersected with the system update:
Code: | # cat ~/.bash_history | grep -i mdadm
mdadm
mdadm --grow --bitmap=internal /dev/md124
mdadm --grow --bitmap=/var/cache/md124-write-intent-bitmap /dev/md124
mdadm --grow --bitmap=internal /dev/md125
mdadm --grow --bitmap=internal /dev/md126
mdadm --grow --bitmap=internal /dev/md/SSDs_0
mdadm --grow --bitmap=internal /dev/md/SSDs_0p1 |
I wanted to reduce the resync time (I've had the occasional unexplained RAID resync, it happens once every few months).
All of the commands (including with an external write-intent bitmap) merely printed:
Code: | # mdadm --grow --bitmap=internal /dev/md124
mdadm: Cannot add bitmaps to sub-arrays yet |
So I assumed they had all failed and made no change. I'm grasping at straws here, but perhaps there were some side effects after all...
Update: no luck.
Code: | # mdadm --grow --bitmap=none /dev/md124
mdadm: no bitmap found on /dev/md124
tiamat /var/log # mdadm --grow --bitmap=none /dev/md125
mdadm: no bitmap found on /dev/md125
tiamat /var/log # mdadm --grow --bitmap=none /dev/md126
mdadm: no bitmap found on /dev/md126
tiamat /var/log # mdadm --grow --bitmap=none /dev/md/SSDs_0
mdadm: no bitmap found on /dev/md/SSDs_0
tiamat /var/log # mdadm --grow --bitmap=none /dev/md/SSDs_0p1
mdadm: no bitmap found on /dev/md/SSDs_0p1 |
Test repeated, still hangs. |
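For completeness, sysfs can confirm the bitmap situation independently of mdadm (a sketch, assuming the md sysfs nodes exist for these external-metadata arrays; "none" means no write-intent bitmap):
Code: | cat /sys/block/md124/md/bitmap/location /sys/block/md126/md/bitmap/location |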
grknight Retired Dev
Joined: 20 Feb 2015 Posts: 2000
Posted: Fri Oct 27, 2023 10:43 pm Post subject: |
Cygon wrote: | Then I copied a ~3 GiB file in the remounted `/srv/games/` partition and called `sync`. There were no visible writes in KSysGuard, the sync call is still hanging now, after waiting ~10 minutes. No change. |
Did you check mdstat after this? 'auto-read-only' may trigger a rebuild on first write (and possibly sync) |
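If it is stuck in auto-read-only, it can be checked and kicked to read-write by hand, e.g.:
Code: | # "read-auto" here would mean no write has promoted the array yet
cat /sys/block/md126/md/array_state
# force it to read-write (this may start the pending resync)
mdadm --readwrite /dev/md126 |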
Cygon Tux's lil' helper
Joined: 05 Feb 2006 Posts: 115 Location: Germany
Posted: Sat Oct 28, 2023 6:14 am Post subject: |
Yes, I checked dmesg, /proc/mdstat and even the related processes each time. The array just remains in the 'normal' and 'clean' state.
I've had a few resyncs in the past, the file system performance is tolerable throughout and I was still able to cleanly shut down the system.
In this case, even the shutdown (both via KDE or via 'reboot' or 'poweroff' in a root shell) hangs forever, so I have no choice but to use the reset button. Thereafter (even if I waited 10-30 minutes before resetting), all changes on the RAID drives are gone and the arrays remain clean. Deleted files come back. File copies I did never happened, and so on. All changes I do seem to wait in the disk cache forever.
When I used the physical reset button in the past, it was a surefire way of having to sit through another RAID resync.
Cygon Tux's lil' helper
Joined: 05 Feb 2006 Posts: 115 Location: Germany
Posted: Sat Oct 28, 2023 8:48 am Post subject: |
Success! I finally got it to write again:
https://imgur.com/a/kk3VRGF
Unfortunately, I made a few changes at once this time, without any testing in between:
- Extracted config.gz out of my month-old kernel and used that as a base
- Compiled gentoo-sources-6.1.57 from scratch (directory deleted, re-emerged, .config copied in)
- Built the NTFS3 code into the kernel image
- Explicitly used `ntfs3` in my `/etc/fstab`
- Rebuilt all packages that were emerged after the `~amd64` GCC release was emerged
Weirdly, the kernel now has md124+md127 for the SSD-based RAID-1 and md125+md126 for the old degraded HDD-based RAID-1.
Also, if I change back to `ntfs-3g` or even just `ntfs`, it still works.
dmesg when RAID drives were not writing:
Code: | [ 3.550098] md/raid1:md126: active with 2 out of 2 mirrors
[ 3.550113] md126: detected capacity change from 0 to 7784628224
[ 3.552416] md126: p1 p2 p3
[ 3.606017] md/raid1:md124: active with 1 out of 2 mirrors
[ 3.606055] md124: detected capacity change from 0 to 7650410496
[ 3.608875] md124: p1 p2 p3 |
dmesg from Gentoo Live USB image where they were working (device order is different):
Code: | [ 9.487873] md/raid1:md126: active with 1 out of 2 mirrors
[ 9.487888] md126: detected capacity change from 0 to 7650410496
[ 9.489964] md126: p1 p2 p3
[ 9.689943] md/raid1:md124: active with 2 out of 2 mirrors
[ 9.689964] md124: detected capacity change from 0 to 7784628224
[ 9.692119] md124: p1 p2 p3 |
dmesg from current system where RAID drives are writing (device order is different yet again):
Code: | [ 3.301147] md/raid1:md125: active with 1 out of 2 mirrors
[ 3.301161] md125: detected capacity change from 0 to 7650410496
[ 3.303336] md125: p1 p2 p3
[ 3.401401] md/raid1:md124: active with 2 out of 2 mirrors
[ 3.401414] md124: detected capacity change from 0 to 7784628224
[ 3.403134] md124: p1 p2 p3 |
Other than the change in the order of the `/dev/md*` devices, I can't find any difference.
It's a bit unsatisfying that I can't pinpoint the exact cause now. I'm happy to check more things if anyone is curious.
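In case someone wants to dig, the one comparison I can still do is diffing the now-working kernel config against a saved copy of the previous one (rough sketch; the old path is just a placeholder):
Code: | zcat /proc/config.gz > /tmp/config.working
diff /tmp/config.working /path/to/old/.config | less |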
NeddySeagoon Administrator
Joined: 05 Jul 2003 Posts: 54799 Location: 56N 3W
Posted: Sat Oct 28, 2023 11:23 am Post subject: |
Cygon,
Compare
Code: | # cat /proc/mdstat
Personalities : [raid0] [raid1] [raid10] [raid6] [raid5] [raid4]
md124 : active raid1 sdb[0]
3825205248 blocks super external:/md125/0 [2/1] [U_] |
With my Code: | cat /proc/mdstat
Personalities : [raid1] [raid6] [raid5] [raid4]
md127 : active raid5 sdc3[0] sda3[2] sdd3[3] sdb3[1]
23441117184 blocks super 1.2 level 5, 64k chunk, algorithm 2 [4/4] [UUUU]
bitmap: 1/59 pages [4KB], 65536KB chunk
|
Adding a write intent bitmap saves the pain of an entire sync. Only the regions flagged in the bitmap will be synced. _________________ Regards,
NeddySeagoon
Computer users fall into two groups:-
those that do backups
those that have never had a hard drive fail. |
Cygon Tux's lil' helper
Joined: 05 Feb 2006 Posts: 115 Location: Germany
Posted: Sat Oct 28, 2023 5:01 pm Post subject: |
I tried adding a write intent bitmap (see a few posts up); it's not supported by mdadm for Intel's weird kind of raid arrays with their needless sub-arrays, unfortunately.
The problem just returned, while the system was running and with no configuration changes of any kind.
It's still present after a reboot. So now I'm back at square one (and I'm using `ntfs3` for all NTFS partitions since this morning).
UPDATE: I also had a manually triggered RAID resync running when it happened. The resync went to 100%, but `mdadm --examine` still said the array was "dirty".
After booting into Windows, running the "Check Disk" function on all partitions w/"Repair" where it was offered, then rebooting back into Linux... the NTFS partitions write again.
This is either just a coincidence and I've got only sporadic failures now - or - the NTFS3/NTFS-3G code has some kind of problem (with this particular version of the NTFS file system?) and is handling it in a very, very confusing way (if silently never completing a write and leaving it hanging is even something a Linux file system driver can do).
Last edited by Cygon on Sun Oct 29, 2023 8:31 am; edited 1 time in total |
Cygon Tux's lil' helper
Joined: 05 Feb 2006 Posts: 115 Location: Germany
Posted: Sun Oct 29, 2023 8:20 am Post subject: |
After being able to cleanly shut down my system yesterday, upon booting today, it started a resync (which did write to the underlying SSDs) but all changes on the file system level accumulated in the disk cache until I once more had to use the reset button.
Dual-booting into Windows shows no errors on the NTFS partitions.
It's beyond frustrating at this point.
What might be the best way of getting past this?
- Kill the Linux partition and start over from scratch with a clean install?
- Forget about the RAID and set up an rsync job instead (rough sketch after this list)
- Use my surplus proprietary hardware RAID controller (I think it's a MegaRAID SAS 9361-4i) - no idea about the driver situation, worried that it may AES-encrypt drives, it would be a single (expensive) point of failure, and so on
- Switch the partitions to ext4 or another well-established file system (assuming the file system driver is even to blame)
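For the rsync option, I'm picturing something along these lines running nightly from cron (paths are made up):
Code: | # one-way mirror of the important data onto a second disk instead of RAID-1
rsync -a --delete /srv/devel/ /mnt/mirror/devel/ |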
If only that machine code bastard that is the cause of this would write the issue it's having into some log... somewhere. |