One of our VPSes became unbootable for some reason, so I am trying to mount it, unfortunately without much success:
root@ns1:/root#
cloudmin mount-system --host ns1.domain.tld
Mounting filesystem for ns1.domain.tld ..
.. failed : Failed to mount /home/servers/ns1.domain.tld.img on /mnt/kvm-ns1.domain.tld : mount: you must specify the filesystem type
cloudmin mount-system --host name
[--dir mount-point]
[--want-dir directory]
Then I tried:
root@ns1:/root#
cloudmin mount-system --host ns1.domain.tld --t ext3
Unknown parameter --t
Mounts the filesystem on the host for some system.
cloudmin mount-system --host name
[--dir mount-point]
[--want-dir directory]
Then I tried:
root@ns1:/root#
cloudmin mount-system --host ns1.domain.tld --type ext3
Unknown parameter --type
Mounts the filesystem on the host for some system.
cloudmin mount-system --host name
[--dir mount-point]
[--want-dir directory]
What is the right parameter? And will this approach work at all, or is the system completely gone?
Howdy,
That does suggest that there's a problem of some sort...
If you attempt to boot your Virtual Machine, what output do you see if you access it via the "Graphical Console" option in Cloudmin?
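Also, it may be worth checking what is actually in that disk image from the Cloudmin host. Something like the following (using the image path from your output above) should tell you whether the file still begins with a recognizable filesystem or with a partition table:
file /home/servers/ns1.domain.tld.img
blkid /home/servers/ns1.domain.tld.img
If blkid prints nothing, there is no filesystem signature at the very start of the image, which would explain why mount can't auto-detect the type.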
-Eric
The problem is that the Graphical Console link is missing. When starting the VM, it gives:
.. started, but a problem was detected : KVM instance was started, but could not be pinged after 60 seconds
I know it is really bad, but I really hope the .img file is still of some use. I need to crack it open and get into it.
root@ns1:/home/servers#
cloudmin mount-system --host ns1.domain.tld
Mounting filesystem for ns1.domain.tld ..
.. failed : Failed to mount /home/servers/ns1.domain.tld.img on /mnt/kvm-ns1.domain.tld : mount: you must specify the filesystem type
root@ns1:/home/servers#
mount /home/servers/ns1.domain.tld /mnt/test -t ext3
mount: /home/servers/ns1.domain.tld is not a block device (maybe try `-o loop'?)
root@ns1:/home/servers#
mount /home/servers/ns1.domain.tld /mnt/test -t ext3 -o loop
loop: can't delete device /dev/loop1: Device or resource busy
mount: wrong fs type, bad option, bad superblock on /dev/loop1,
missing codepage or helper program, or other error
In some cases useful info is found in syslog - try
dmesg | tail or so
I wonder what other methods there are to try to open this file?
For some reason this system was missing its .swap file, so I copied one from another VM (I know that may not be right, but I don't know how or where to obtain the missing swap file) and rebooted. Finally I had the Graphical Console, but it shows the system is not bootable, as in the screenshot.
I also noticed that working VMs show:
Current use Mounted on / as Linux EXT3
and the corrupted one is not mounted as ext3; it only shows:
Disk image format Whole disk
Well, the problem is that the .img file is a "filesystem in a file".
And the errors you're seeing suggest that somehow, the filesystem became corrupted.
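One more thing worth trying: since Cloudmin reports the disk image format as "Whole disk", the image probably starts with a partition table rather than a bare filesystem, which would also explain why mount can't find an ext3 superblock at offset 0. In that case the partition inside the image has to be mounted at a byte offset, roughly like this (the start sector of 63 below is only an example; use the value fdisk actually reports):
fdisk -l /home/servers/ns1.domain.tld.img
mount -o loop,offset=$((63 * 512)) /home/servers/ns1.domain.tld.img /mnt/test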
I'll talk to Jamie though and see if he has any additional thoughts on that.
One thought that comes to mind though -- if you log into the Cloudmin host, and run this command, what output do you see:
dmesg | tail -30
Also, did anything else happen around this time? A power outage or unexpected shutdown, for example?
-Eric
Thanks Eric,
We've already contacted Jamie and he said he will take a look at our system.
The dmesg | tail -30 command gives:
root@ns1:/root#
dmesg | tail -30
br0: port 14(tap12) entering disabled state
SELinux: 2048 avtab hash slots, 278061 rules.
SELinux: 2048 avtab hash slots, 278061 rules.
SELinux: 9 users, 12 roles, 3900 types, 205 bools, 1 sens, 1024 cats
SELinux: 81 classes, 278061 rules
EXT3-fs (loop0): error: can't find ext3 filesystem on dev loop0.
device tap12 entered promiscuous mode
br0: port 14(tap12) entering forwarding state
br0: port 14(tap12) entering disabled state
device tap12 left promiscuous mode
br0: port 14(tap12) entering disabled state
device tap12 entered promiscuous mode
br0: port 14(tap12) entering forwarding state
br0: port 14(tap12) entering disabled state
device tap12 left promiscuous mode
br0: port 14(tap12) entering disabled state
EXT3-fs (loop1): error: can't find ext3 filesystem on dev loop1.
EXT3-fs (loop1): error: can't find ext3 filesystem on dev loop1.
device tap12 entered promiscuous mode
br0: port 14(tap12) entering forwarding state
tap12: no IPv6 routers present
br0: port 14(tap12) entering forwarding state
br0: port 14(tap12) entering disabled state
device tap12 left promiscuous mode
br0: port 14(tap12) entering disabled state
EXT3-fs (loop2): error: can't find ext3 filesystem on dev loop2.
device tap12 entered promiscuous mode
br0: port 14(tap12) entering forwarding state
tap12: no IPv6 routers present
br0: port 14(tap12) entering forwarding state
There were no outages or other unusual events; besides, the several other VMs on this Cloudmin host are running just fine. However, I have a theory about what might have happened.
This particular VM was initially created with the hostname '000', to which Cloudmin adds the domain name part, so it becomes '000.domain.tld'. But later it was assigned a client domain name, and its hostname in the Cloudmin interface was changed to 'ns1.clientdomain.tld'.
Some time later, we created another VM with the hostname '000' (just to keep it at the top of the list of VMs), which didn't come up and caused 'ns1.clientdomain.tld' to go down, apparently because Cloudmin created files for two different VMs with similar names. Our mistake was to go ahead and delete the new '000'; I am afraid the swap file for the earlier client VM was deleted together with it.
If this was the cause, then it is a very serious bug, and Cloudmin must have checks to avoid file name confusion when a user tries to add or delete two VMs with identical names.
Cloudmin should have prevented this by blocking creation of a VM with the same underlying name as an existing one ... but just in case I'll add a check to the next Cloudmin release to prevent a VM from being created that would over-write the disks of an existing one.
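As for getting data out of the existing image: if only the superblock was damaged, e2fsck can sometimes repair the filesystem from one of its backup copies. A rough sketch (the loop device name and the 32768 backup-superblock location are assumptions; running mke2fs -n against the loop device would list the real backup locations without writing anything):
losetup /dev/loop3 /home/servers/ns1.domain.tld.img
e2fsck -b 32768 /dev/loop3
losetup -d /dev/loop3
If the image turns out to be a whole-disk image, add -o with the partition's byte offset to the losetup command.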