Hi everyone! I want to access, from the host, a folder inside the guest that corresponds to a cloud drive mounted inside the guest for security purposes. I have tried setting up a shared filesystem inside virt-manager (KVM) with virtiofs (following this tutorial: https://absprog.com/post/qemu-kvm-shared-folder), but as soon as I mount the folder in order for it to be accessible on the host, the cloud drive gets unmounted. I guess a folder cannot have two mounts at the same time. Aliasing the folder using a bind mount and then sharing the aliased folder with the host doesn't work either: the aliased folder is simply empty on the host.
Does anyone have an idea regarding how I might accomplish this? Is KVM the right choice, or would something like docker or podman be better suited for this job? Thank you.
Edit: To clarify:
The cloud drive is mounted inside a virtual machine for security purposes, as the binary is proprietary and I do not want to mount it on the host (bwrap and the like introduce a whole lot of problems: the drive doesn't sync anymore and I have to re-login each time). I do not use the virtual machine per se; I just start it and leave it be.
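For anyone reproducing this, the guest-side mount from that tutorial boils down to one command (a sketch; "sharedfs" is a placeholder for whatever target tag the VM's filesystem definition uses, and /mnt/shared is a placeholder mountpoint). As far as I understand, files written into a virtiofs share do reach the host, but mounts created inside the guest on top of the share (the cloud client's FUSE mount, or a bind alias) stay guest-local, which would explain why the aliased folder looks empty from the host.

```shell
# Guest side: mount the virtiofs share defined in the VM's configuration.
# "sharedfs" is the target tag set in virt-manager; /mnt/shared is a placeholder.
sudo mount -t virtiofs sharedfs /mnt/shared
```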
fwiw: if you go with the container strategy with docker or podman, you should be able to use a storage overlay, based on how i'm reading your question.
it's hard to ascertain any path forward without knowing more details on the cloud drive and how it's currently mounted on the guest instance.
I have no idea how it is mounted (how can I find out?) because the binary is proprietary. This is why it is contained inside a virtual machine.
run the command mount with sudo access, and if you can see it enumerated in the printout then you should be able to proceed with either a container overlay or a separate mount point. if not, then it'll get very advanced very quickly; do you know how to use strace?
I just checked and it is mounted as a fuse drive.
do you know how to use strace?
A very confident NO :)
fortunately we won’t have to bother w strace; but i think i can see where you’ll be blocked.
do you have to provide a username/password or token when you try to access the drive now?
if yes, then you should be able to mount it like you’re trying to do using instructions like these and you can use the information from the last printout to fill in the blanks.
if no, then its access is controlled outside of your guest instance and you’ll need to ask your admins to enable access.
do you have to provide a username/password or token when you try to access the drive now?
I do but it’s through the proprietary GUI of the binary which has no CLI or API I can use.
then strace might help if we’re lucky enough to get something like memory addresses.
strace can be very verbose and requires a lot of knowledge that i doubt i can share through comments back and forth.
is creating an intermediary like others have commented on in this post an option? they're generally easier and faster than strace and there's no guarantee that strace will show us the information we need.
strace can be very verbose and requires a lot of knowledge that i doubt i can share through comments back and forth.
No worries. Thanks a lot nonetheless.
is creating an intermediary like others have commented on in this post an option?
What do you mean by intermediary? Do you mean syncing the files with the VM and then sharing the synced copy with the host?
That wouldn't work, since my drive is smaller than the cloud drive and I need all the files on-demand.
Does rclone support the cloud service?
It does not, hence my question.
Gotcha, in that case maybe a container? You can use a bind mount to link a folder on the host to a path inside the container. You could use docker/podman or LXC.
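A sketch of that container route, assuming rootful podman and a hypothetical image name ("cloud-client"); the important parts are the fuse device and the :rshared propagation flag, so that a FUSE mount created inside the container becomes visible on the host side of the bind:

```shell
# Host side: prepare the folder that will receive the cloud drive's contents.
sudo mkdir -p /srv/cloud

# Run the proprietary client in a container ("cloud-client" is a placeholder image).
# --device /dev/fuse lets the client create its FUSE mount; SYS_ADMIN is needed
# for mount(2); :rshared propagates mounts made under /mnt/cloud back to /srv/cloud.
sudo podman run -d --name cloud \
  --device /dev/fuse \
  --cap-add SYS_ADMIN \
  -v /srv/cloud:/mnt/cloud:rshared \
  cloud-client
```

Unlike the VM case, this can work because container and host share one kernel, so mount propagation can cross the boundary.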
This is what I have been trying for the past two days actually: https://lemmy.ml/post/22215540 Could you please assist me there if you have an idea? Thanks :)
Use something like Samba to share files between the two systems.
I think NFS would be a better choice if I decide to go that route. Isn’t SAMBA slower and older than NFS?
Wouldn't you just be able to create a folder for Xdrive (an imaginary alternative to Google Drive) in the virtual machine and another one on the host?
Since they are both synchronized with Xdrive, they would have the same content.
The cloud drive is mounted inside a virtual machine for security purposes, as the binary is proprietary and I do not want to mount it on the host (bwrap and the like introduce a whole lot of problems: the drive doesn't sync anymore and I have to re-login each time). I do not use the virtual machine per se; I just start it and leave it be.
The best option would be to have a “regular” client that keeps a local copy in sync with the cloud instead of a mount.
BTW: IDK what cloud storage you are using, but IIRC some show files that are not available locally (i.e. only the most recent files are kept locally; the older stuff is downloaded on request).
Alternatively, you could hack something together running unison locally in the guest to sync the cloud folder to a shared one… you’ll have two copies of the data though.
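If the sizes allowed it, that unison hack could be as small as one command (a sketch; both paths are placeholders, and the second root is assumed to be a folder shared with the host):

```shell
# Guest side: mirror the FUSE-mounted cloud folder into a shared folder.
# -batch suppresses prompts; -force makes the cloud side win any conflicts.
unison /home/user/CloudDrive /mnt/shared -batch -force /home/user/CloudDrive
```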
That would be impossible since the cloud drive is 2TB and my physical storage space is under 500GB in size.
Maybe reshare the directory locally through Samba on your VM?
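A minimal smb.conf fragment for that reshare, with a placeholder path and user; note that the FUSE mount usually needs the allow_other option (user_allow_other in /etc/fuse.conf), otherwise smbd, running as a different user, sees an empty directory:

```ini
# /etc/samba/smb.conf on the guest (sketch; path and user are placeholders)
[clouddrive]
   path = /home/user/CloudDrive
   read only = yes
   valid users = user
```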
Why not NFS? Regardless, wouldn't it be slower anyway compared to virtiofs?
Just throwing it out there as an option. Good luck.
Thank you!
I don't understand what you mean by the content disappearing when you mount the virtiofs on the guest - isn't the mount empty when bound, until the guest populates it?
Can you share which sync client and guest OS you are using? If the client does "advanced" features like files on demand, it might clash with virtiofs - this is where the details of the client/OS could be relevant: does it require local storage, or does it support remote storage?
If the guest OS is Windows, Samba-share it to the host. If the guest OS is Linux, NFS will probably do. In both cases I would host the share on the client, unless the client specifically supports remote storage.
podman/docker seems to be the proper tool for you here; a VM with the Samba/NFS approach could be less hassle and less complicated, but somewhat bloaty. Containers require some more tailoring but in theory are the right way to go.
Keep in mind that a screwup could be interpreted by the sync client as mass-deletes, so backups are important (as a rule of thumb, it always is, but especially for cloud hosted storage)
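For the NFS route, a sketch of the guest's /etc/exports (placeholder path; 192.168.122.0/24 assumes the default libvirt NAT subnet). FUSE filesystems have no stable device number, so an explicit fsid is required, and the FUSE mount generally needs allow_other so that the NFS server can read it:

```
# /etc/exports on the guest (sketch)
/home/user/CloudDrive  192.168.122.0/24(ro,fsid=101,no_subtree_check)
```

After `exportfs -ra` in the guest, the host would mount it with something like `mount -t nfs <guest-ip>:/home/user/CloudDrive /mnt/cloud`.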
I don't understand what you mean by the content disappearing when you mount the virtiofs on the guest - isn't the mount empty when bound, until the guest populates it?
Sorry, I made a mistake in the original post. I wanted to say on the host instead of on the guest. My bad.
Yes, you are correct: the folder is empty until I log in inside the cloud application on the guest.
does it require local storage or support remote?
What do you mean? The cloud drive is a network drive basically. It only downloads files on demand.
if guest os is linux, nfs will probably do
This is what others have suggested and what I will probably do if the method below fails.
podman/docker seems to be the proper tool for you here
Yesterday I actually tried to spin up a podman container hoping it would work, but I encountered the following problem when trying to propagate mounts: https://lemmy.ml/post/22215540
Could you please assist me there if you have further ideas? Thank you :)
Keep in mind that a screwup could be interpreted by the sync client as mass-deletes
I am VERY aware of this *sweating*
9p server
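Presumably this means running a 9p file server inside the guest (e.g. diod) and mounting it from the host with the kernel's 9p client; a sketch with a placeholder guest address and path:

```shell
# Host side: mount a 9p export served from the guest over TCP (diod listens on 564).
sudo mount -t 9p -o trans=tcp,port=564,version=9p2000.L,aname=/home/user/CloudDrive \
    192.168.122.50 /mnt/cloud
```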