Discussion:
USB storage detachment / reattachment
Josef Söntgen
2016-12-23 09:14:23 UTC
Permalink
Hello Martijn,
For the latter finding, I am aware that the usb driver supports a
policy mechanism for raw devices (in combination with the
usb_report_filter component). But to my knowledge for storage devices,
such a policy mechanism does not exist, right?
FWIW, there is a USB block storage driver [1] that uses the Usb raw
session and can be used instead of the built-in storage driver of the
usb_drv. A custom runtime/management component could monitor the
usb_drv device report and spawn the whole stack when it detects a USB
storage device. The usb_drv's device report does not contain the device
class so far, though, but adding that to the report is easy.

[1] repos/os/src/drivers/usb_block
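
For illustration, the device report that such a management component
would watch is an XML report of roughly the following shape (the exact
attribute set is written from memory and may differ; as noted above, a
device-class attribute would still have to be added):

```xml
<devices>
	<!-- one node per plugged-in USB device, attributes match the
	     identifiers used in usb_drv <policy> nodes -->
	<device label="usb-1-3" vendor_id="0x058f" product_id="0x6387"
	        bus="0x0001" dev="0x0003"/>
	<!-- hypothetical: a class attribute (0x08 = mass storage) would
	     let the component detect storage devices directly -->
</devices>
```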


Regards
Josef
--
Josef Söntgen
Genode Labs

http://www.genode-labs.com/ · http://genode.org/
Norman Feske
2016-12-23 08:11:16 UTC
Permalink
Hi Martijn,
- When I remove the USB stick, the usb driver detects removal, but
the rump_fs remains unaware. The CLI component can successfully open
new file system sessions and even list the files in the root
directory, even though the actual storage device is detached...
this is where the problem begins. Unlike the NIC session, neither the
block session nor the file-system session has any notion of unplugging
devices. Once connected, a client expects the session to be available
until it is closed.
- The rump_fs server aborts when a filesystem is not of the expected
type.
I think this is the adequate behavior in this situation. From the
file system's perspective, this is a fatal condition.
- To complicate matters more, the target platform is booted from a -
different - USB stick. Currently the usb driver detects this USB
stick as mass storage device and the rump_fs aborts because the fs is
not the expected ext2fs.
What you describe is the general case of using hot-swappable storage. To
build a system that works, we need to anticipate that storage sizes and
file-system types may differ. The system must stay robust nonetheless.
For the latter finding, I am aware that the usb driver supports a
policy mechanism for raw devices (in combination with the
usb_report_filter component). But to my knowledge for storage
devices, such a policy mechanism does not exist, right?
Our mid-term goal is to remove the built-in storage/HID/networking
support from the USB driver and move this functionality into dedicated
components that use the USB-session interface. This will make us much
more flexible because the policy configuration can then be used to
explicitly assign devices to clients. Right now, the USB driver's
built-in policy provides the first storage device as a block session.
This is quite limiting. E.g., there is no good way to access multiple
storage devices at the same time.
Regarding detachment / reattachment of USB storage, I understand that
at startup of this composition, the rump_fs server immediately
requests a block session at the part_blk server, which in turn
requests a block session at the usb driver. This whole chain blocks
until a USB storage device is plugged in. When this happens, the chain
of session requests is set up and the file-system client can access the
medium. Now if the USB storage device is detached, what happens to
the open sessions?
They ultimately fail. From their perspective, the situation is no
different from a hard disk that just died.

To implement your scenario, we need to come up with a protocol that
takes care of orderly closing the sessions before the medium disappears.

1. We need to tell the client to release the file-system session, e.g.,
   via a config update or by killing the client. Once the client has
   complied (or ceased to exist),
2. We need to tell the (now client-less) file-system server to close
   the block session. In principle, we could just kill it since it
   has no client anyway. But in practice, we want to make sure that
   the file system writes back the content of its block cache before
   closing the block session. Once the file-system server is gone,
3. We need to tell part_blk to release the block session at the
   driver, or kill it. Once part_blk is gone,
4. There is no longer a block client at the USB driver, so we can
   remove the USB stick. The next time a client connects, it will
   perform the regular procedure that worked the first time.

As of now, Genode provides no established solution for realizing such a
protocol. The dynamic init that I outlined in my road-map posting will
make such scenarios much easier to implement. But until it is ready, I am
afraid that you will need to implement it in the form of a custom runtime.
As a way to support detachment / reattachment of USB storage I’m
thinking about placing the rump_fs and part_blk components in a child
subtree of the CLI component that is spawned on demand and cleaned-up
after use. But this seems a bit like overkill.
That's exactly the right solution. I don't think that it's overkill
either. Spawning rump_fs and part_blk dynamically is certainly quick
enough. Memory-wise, it does not take more resources than a static
scenario either. By letting your CLI component implement the protocol
outlined above, you have full control over the chain of events. Also,
an aborting rump_fs is no longer fatal but can be handled gracefully
by the CLI component. As another benefit, this solution does not require
us to add the notion of hot-plugging to the file-system and block
session interfaces, which would otherwise inflate the complexity of
these interfaces (and thereby all the clients that rely on them).

Cheers
Norman
--
Dr.-Ing. Norman Feske
Genode Labs

http://www.genode-labs.com · http://genode.org

Genode Labs GmbH · Amtsgericht Dresden · HRB 28424 · Sitz Dresden
Geschäftsführer: Dr.-Ing. Norman Feske, Christian Helmuth
Martijn Verschoor
2016-12-23 15:37:14 UTC
Permalink
Hi Norman, Josef,

Thanks for your feedback and useful tips. It is clear to me now how to approach this.

@all: happy holidays!

Met vriendelijke groet / kind regards,

Martijn Verschoor

Cyber Security Labs B.V. | Gooimeer 6-31 | 1411 DD Naarden | The Netherlands
+31 35 631 3253 (office) | +31 616 014 087 (mobile)
Boris Mulder
2017-01-12 14:47:46 UTC
Permalink
Dear Genode developers,
Post by Norman Feske
As a way to support detachment / reattachment of USB storage I’m
thinking about placing the rump_fs and part_blk components in a child
subtree of the CLI component that is spawned on demand and cleaned-up
after use. But this seems a bit like overkill.
That's exactly the right solution. I don't think that it's overkill
either. Spawning rump_fs and part_blk dynamically is certainly quick
enough. Memory-wise, it does not take more resources than a static
scenario either. By letting your CLI component implement the protocol
outlined above, you have full control over the chain of events. Also,
an aborting rump_fs is no longer fatal but can be handled gracefully
by the CLI component. As another benefit, this solution does not require
us to add the notion of hot-plugging to the file-system and block
session interfaces, which would otherwise inflate the complexity of
these interfaces (and thereby all the clients that rely on them).
Martijn and I have been thinking of a way to implement this, and came to
the conclusion that, instead of spawning the stack as children of the CLI
component, it might be better to use a new management component:
Post by Norman Feske
A custom runtime/management component could monitor the
usb_drv device report and spawn the whole stack if it detects a USB
storage device. The usb_drv's device report does not contain the device
class so far though but adding that to the report is easy.
This is exactly what we're trying to do now. We want to create a custom
component called "media" that monitors USB devices by reading the
report. It provides a service to other components through which they can
request a file-system session in order to read from / write to the USB
stick. For this, it spawns the part_blk and rump_fs components as
children when the USB stick is plugged in, and kills them once it is
unplugged. It roughly looks like this:

     rump_fs  part_blk
        \       /
CLI      media      USB_drv
  \        |        /
   +-------+-------+
           |
         init

But this raises a few questions. First, the file-system interface needs
to be presented to the client somehow. To avoid adding another layer of
indirection into media, essentially duplicating rump_fs's entire API, we
would like the client (in this case CLI) to be connected directly to
rump_fs. The client can then ask media whether the USB stick is
connected before calling a function of rump_fs.

However, this means that rump_fs provides a service and announces it to
its parent (media), and media has to decide what to do with that
announcement. It could run rump_fs as a slave, but that way the entire
API would need to be duplicated in media so that media can present it as
its own service to the client.

So we would like media to announce the file-system service to its parent
(in this case init), so that any client can use this service. In the
same way, any session request would be passed from CLI to its parent
(init), init would pass it to media, and media would pass it to rump_fs.
However, the current implementation and specification of Genode does not
allow services of a server to exist at any level above the server's
parent. Services can only be provided to direct parents, and to other
components in the parent's subtree. Therefore, copying the API from the
child to the parent seems unavoidable.

Another problem that pops up is that media has to spawn all these
subcomponents as children. In order to route block-session requests from
rump_fs to part_blk, media needs to implement some routing policy and
effectively serves the same role for these two components as init serves
for the system. So we could:

1. Copy all the necessary routing code from init into media (which is
almost all of init's code if we want to be generic).

2. Let media spawn another init child component (let's call it sub-init
for now), which in turn spawns rump_fs and part_blk and does the routing.

To us, the second option seems much cleaner as it involves no code
duplication. However, services announced by rump_fs could not be used by
components that are not children of the new init, and would thus be
rather useless. Their announcements cannot be passed on to the parents,
leaving us with the same problem as we had with rump_fs, with the
additional problem that even if there were a custom way to forward
service announcements and requests to the parent/child respectively,
sub-init has no such policy, and this functionality would have to be
included in sub-init's code as well, adding a lot of complexity.

Eventually, both cases boil down to the same problem: service
announcements to a parent cannot automatically be forwarded to that
parent's parent, and likewise, session requests cannot be delegated to
children of children without a lot of hassle. The only option, if I'm
correct, is implementing this functionality manually, but that does not
work if the parent is an existing component that does not support it.

Is there a reason this is never done? For init it is clear that it would
never pass an announcement to its parent (usually core) or receive
session requests from it. But what about the general case?

And how should we solve cases such as the above scenario?

kind regards,

Boris Mulder

Cyber Security Labs B.V. | Gooimeer 6-31 | 1411 DD Naarden | The Netherlands
+31 35 631 3253 (office)
Norman Feske
2017-01-13 10:32:31 UTC
Permalink
Hello Boris,

welcome to the mailing list and thank you for the elaborate description
of your scenario and approach.

As a side note, the discussion reminds me of a very similar problem we
addressed some years ago:

http://genode.org/documentation/release-notes/12.02#Device_drivers

Unfortunately, we removed the described d3m component later on because
it turned out to be not as flexible as we had hoped. On the positive
side, however, scenarios like yours are not completely alien to Genode. ;-)
Post by Boris Mulder
This is exactly what we're trying to do now. We want to create a custom
component called "media" that monitors usb devices by reading the
report. It provides a service to other components through which they can
request a filesystem session in order to read-write from/to the usb-stick. For
this, it spawns the part_blk and rump_fs components as children if the
usb is plugged in, and kills them once the usb is plugged out. It
roughly looks like this:

     rump_fs  part_blk
        \       /
CLI      media      USB_drv
  \        |        /
   +-------+-------+
           |
         init
This looks very good to me.
Post by Boris Mulder
But this raises a few questions. First, the filesystem interface needs
to be presented to the client somehow. To avoid adding another layer of
indirection into media, essentially duplicating rump_fs's entire API, we
would like the client (in this case CLI) to be directly connected to
rump_fs. The client can then ask media if the USB is connected before
calling a function from rump_fs.
You are right that wrapping the 'File_system' interface would be
cumbersome. In your case, it is better to let CLI use the
rump_fs-provided session directly. This can be achieved by letting the
media component pass the session capability as obtained from rump_fs to
its parent (init). So CLI would use the rump_fs session directly.
Post by Boris Mulder
However, this means that rump_fs provides a service, announces it to its
parent (media), and media has to decide what to do with that announce.
It can implement rump_fs as a slave, but that way the entire API needs
to be copied into media so media can present it as its own service to the client.
You are already on the right track. Running rump_fs as a slave is good.
You just missed a tiny piece of the puzzle: The 'Slave::Connection' does
not only provide the session interface of the slave's service but also
the corresponding 'Session_capability' (it inherits
'CONNECTION::Client', so the 'Slave::Connection' _is_ a session
capability). Instead of calling the 'File_system' methods, the media
component would pass this 'Session_capability' to init in response to
the 'File_system' session request that originated from init.
Post by Boris Mulder
Services can only be provided to direct parents, and to other components
in the parent's subtree. Therefore, copying the API from the child to
the parent seems unavoidable.
There is no such limitation. But you are right that the use case has
been so rare that it is nearly impossible to find examples in Genode's
source tree. The above-mentioned d3m was such an example. Another
example is the GDB monitor (however, here we temporarily removed the
feature to run Genode services within GDB monitor).
Post by Boris Mulder
Another problem that pops up is that media has to spawn all these
subcomponents as children. In order to route block session requests from
rump-fs to part-blk, media needs to implement some routing policy and
effectively serves the same role for these two components as init serves
1. Copy all necessary code for routing from init to media (which is
almost all code if we want to be generic).
2. Let media spawn another init child component (let's call it sub-init
for now) which in turn spawns rump-fs and part-blk and does the routing.
To us, the second option seems much more clean as it involves no
code-copying. However, services announced by rump-fs can not be used by
other components that are not children of the new init, and are kind of
useless. Their announcements can not be passed on to the parents,
leaving us with the same problem as we had with rump_fs but with the
additional problem that even if there would be a custom way to forward
service announces and requests to the parent/child respectively,
sub-init has no such policy, and this functionality has to be included
in sub-init's code as well, adding a lot of complexity.
I agree with everything you said. Until Genode 16.11, it was not
reasonable for init to forward session requests to its children because
of the synchronous nature of the parent interface. Now that we have
revised this interface to work asynchronously [1], we can move forward
and add this feature to init. Indeed, I plan to add it along with the
dynamic reconfigurability of init in the near-term future (as outlined
in my original road-map posting [2]). With the new version of init,
scenarios like yours will become pretty straightforward to realize.

[1]
http://genode.org/documentation/release-notes/16.11#Asynchronous_parent-child_interactions
[2]
https://sourceforge.net/p/genode/mailman/genode-main/thread/585A6FE2.1060800%40genode-labs.com/#msg35563593
Post by Boris Mulder
And how should we solve cases such as the above scenario?
In the not-too-distant future, your case should be well covered by init,
alleviating the need to implement a custom runtime component. In the
meantime, I recommend you follow the slave approach described above
(forwarding the session capability of the 'Slave::Connection' to init).

I would be very interested to hear how this turns out. Should my above
description remain too vague or leave your questions unanswered, please
don't hesitate to get back to me.

Cheers
Norman
--
Dr.-Ing. Norman Feske
Genode Labs

http://www.genode-labs.com · http://genode.org

Genode Labs GmbH · Amtsgericht Dresden · HRB 28424 · Sitz Dresden
Geschäftsführer: Dr.-Ing. Norman Feske, Christian Helmuth
Boris Mulder
2017-01-27 11:00:04 UTC
Permalink
I have been looking into your suggestions and I have some questions
about them.
Post by Norman Feske
You are already on the right track. Running rump_fs as a slave is good.
You just missed a tiny piece of the puzzle: The 'Slave::Connection' does
not only provide the session interface of the slave's service but also
the corresponding 'Session_capability' (it inherits
'CONNECTION::Client', so the 'Slave::Connection' _is_ a session
capability). Instead of calling the 'File_system' methods, the media
component would pass this 'Session_capability' to init as response to
the 'File_system' session request that originated from init.
I assume here the session() method inherited from Genode::Root has to be
implemented such that it returns the capability that is the
Slave::Connection after that connection has been initiated?
Post by Norman Feske
I have personally done some work related to this issue. First off, I
would suggest adding code to allow init to share child services with
its parent. I also have a service_router component that I wrote. You
may not be able to use it directly, but feel free to take some of the
code:
https://github.com/NobodyIII/genode/tree/master/repos/os/src/server/service_router
The code is a bit messy, so any help on making it ready to merge into
the official Genode repo would be very welcome.
Here, you create a Forwarded_capability struct, which wraps a session
capability. It inherits from Id_space<Parent::Client>::Element. Why, if
I may ask? Do I need to do that too?

It eventually invokes env.session() to create a new capability for the
forwarded service. Why does it not get its capability from the server,
but instead seems to create a new session for a certain service? It
seems to me that the service_router does not forward capabilities from
children, or am I wrong? Does the cap live somewhere else?

I'm missing the picture a bit here. Can you explain how it works with
those capabilities?
--
Met vriendelijke groet / kind regards,

Boris Mulder

Cyber Security Labs B.V. | Gooimeer 6-31 | 1411 DD Naarden | The Netherlands
+31 35 631 3253 (office)
Norman Feske
2017-01-30 10:09:52 UTC
Permalink
Hi Boris,
Post by Boris Mulder
Post by Norman Feske
You just missed a tiny piece of the puzzle: The 'Slave::Connection' does
not only provide the session interface of the slave's service but also
the corresponding 'Session_capability' (it inherits
'CONNECTION::Client', so the 'Slave::Connection' _is_ a session
capability). Instead of calling the 'File_system' methods, the media
component would pass this 'Session_capability' to init as response to
the 'File_system' session request that originated from init.
I assume here the session() method inherited from Genode::Root has to be
implemented such that it returns the capability that is the
Slave::Connection after that connection has been initiated?
yes.

Cheers
Norman
--
Dr.-Ing. Norman Feske
Genode Labs

http://www.genode-labs.com · http://genode.org

Genode Labs GmbH · Amtsgericht Dresden · HRB 28424 · Sitz Dresden
Geschäftsführer: Dr.-Ing. Norman Feske, Christian Helmuth
Boris Mulder
2017-01-31 12:13:57 UTC
Permalink
All right; so far, the forwarding of sessions works. However, there is
an issue when closing a session.

Whenever a client connection is closed, the client calls close() with a
session cap on the root. The root then has to look through its open
sessions, compare the session caps of each of those open sessions
with the provided cap, and then clean up all data related to that
session.

For the service_router example, it does the following at line 52
(service_router/main.cc):

for (Forwarded_capability *cap = _caps.first(); cap; cap = cap->next()) {
    if (*cap == session)
        return cap;
}

It checks whether these capabilities are equal using the '==' operator.
In Capability, this operator compares the internal pointers
(Native_capability::Data *_data) of the two Capability objects; each
pointer refers to an object containing metadata such as an RPC
destination and a key.

However, when this session capability is passed as an argument to the
close() or upgrade() method of the root RPC interface, the unmarshaller
at the server side will always create a new Capability object with new
data using the Capability_space_tpl::import method (if I am not
mistaken), instead of using lookup(). This is done, for instance, on
linux and on nova in ipc.cc. Therefore, the cap pointers will never be
equal, although they refer to distinct, duplicated cap data objects with
the same content. Is this the correct behaviour?

When testing it with print() by inserting the following line

log("testing... session = ", session, " cap = ", cap, " equal = ", session == cap);

it outputs the following:

session = cap<socket=27,key=474> cap = cap<socket=27,key=474> equal = 0

So the comparison will always fail, and the overloaded close() and
upgrade() methods of Root cannot close/upgrade the correct session.

Am I missing something here or is it not possible right now to locally
keep track of multiple forwarded session capabilities in this way?

Or is there a workaround?

Regards,

Boris
Post by Norman Feske
Hi Boris,
Post by Boris Mulder
Post by Norman Feske
You just missed a tiny piece of the puzzle: The 'Slave::Connection' does
not only provide the session interface of the slave's service but also
the corresponding 'Session_capability' (it inherits
'CONNECTION::Client', so the 'Slave::Connection' _is_ a session
capability). Instead of calling the 'File_system' methods, the media
component would pass this 'Session_capability' to init as response to
the 'File_system' session request that originated from init.
I assume here the session() method inherited from Genode::Root has to be
implemented such that it returns the capability that is the
Slave::Connection after that connection has been initiated?
yes.
Cheers
Norman
--
Met vriendelijke groet / kind regards,

Boris Mulder

Cyber Security Labs B.V. | Gooimeer 6-31 | 1411 DD Naarden | The Netherlands
+31 35 631 3253 (office)
Norman Feske
2017-01-31 15:30:40 UTC
Permalink
Hello Boris,
Post by Boris Mulder
When testing it with print() by inserting the following line
log("testing... session = ", session, " cap = ", cap, " equal = ", session == cap);
session = cap<socket=27,key=474> cap = cap<socket=27,key=474> equal = 0
So the comparison will always fail, and the overloaded close() and
upgrade() methods of Root cannot close/upgrade the correct session.
Am I missing something here or is it not possible right now to locally
keep track of multiple forwarded session capabilities in this way?
the kernel mechanisms for re-identifying capabilities vary a lot between
kernels. For example, for seL4 I brought up this problem long ago [1],
but there is still no good solution. On NOVA, the situation looks a bit
brighter since we extended the kernel in this respect. On base-hw, it
works.

[1] http://sel4.systems/pipermail/devel/2014-November/000114.html

For your current scenario, I recommend changing the comparison to

session.local_name() == cap.local_name()

The 'local_name' corresponds to the 'key' you observe in the output of
the capability. It is expected to be unique for the corresponding RPC
object.

In the longer term, we aim to largely eliminate the need to re-identify
capabilities. In particular, since Genode 16.11 [2], the interplay
between parent and child components no longer relies on the
re-identification of capabilities; it employs IDs instead. In fact,
under the hood, there are no 'Root' RPC calls between components any
more. But at the API level, we have not made the new facilities
available yet. For now, I recommend using the 'local_name', or the
'Object_pool', which is a data structure that associates capabilities
with component-local objects.

[2]
http://genode.org/documentation/release-notes/16.11#Asynchronous_parent-child_interactions

Cheers
Norman
--
Dr.-Ing. Norman Feske
Genode Labs

http://www.genode-labs.com · http://genode.org

Genode Labs GmbH · Amtsgericht Dresden · HRB 28424 · Sitz Dresden
Geschäftsführer: Dr.-Ing. Norman Feske, Christian Helmuth
Boris Mulder
2017-01-31 15:47:10 UTC
Permalink
Thanks, that solves it for now.

Boris
Post by Norman Feske
Hello Boris,
Post by Boris Mulder
When testing it with print() by inserting the following line
log("testing... session = ", session, " cap = ", cap, " equal = ", session == cap);
session = cap<socket=27,key=474> cap = cap<socket=27,key=474> equal = 0
So the comparison will always fail, and the overloaded close() and
upgrade() methods of Root cannot close/upgrade the correct session.
Am I missing something here or is it not possible right now to locally
keep track of multiple forwarded session capabilities in this way?
the kernel mechanisms for re-identifying capabilities vary a lot between
the various kernels. For example, for seL4 I brought up this problem
long ago [1] but there is still no good solution. On NOVA, the situation
looks a bit brighter since we extended the kernel in this respect. In
base-hw, it works.
[1] http://sel4.systems/pipermail/devel/2014-November/000114.html
For your current scenario, I recommend you to change the comparison to
session.local_name() == cap.local_name()
The 'local_name' corresponds to the 'key' you observe in the output of
the capability. It is expected to be unique for the corresponding RPC
object.
In the longer term, we try to largely eliminate the need to re-identify
capabilities. In particular since Genode 16.11 [2], the interplay
between parent and child components no longer relies on the
re-identification of capabilities. It employs IDs instead. In fact,
under the hood, there are no 'Root' RPC calls between components any
more. But at the API level, we have not made the new facilities
available yet. For now, I recommend you to use the 'local_name', or the
'Object_pool', which is a data structure that associates capabilities
with a component-local object.
[2]
http://genode.org/documentation/release-notes/16.11#Asynchronous_parent-child_interactions
Cheers
Norman
--
Met vriendelijke groet / kind regards,

Boris Mulder

Cyber Security Labs B.V. | Gooimeer 6-31 | 1411 DD Naarden | The Netherlands
+31 35 631 3253 (office)
Boris Mulder
2017-02-10 13:12:18 UTC
Permalink
Hi, I've stumbled upon a bit of a problem when using the usb driver:
Post by Josef Söntgen
FWIW, there is an USB block storage driver [1] that uses the Usb raw
session and can be used instead of the in-built storage driver of the
usb_drv. A custom runtime/management component could monitor the
usb_drv device report and spawn the whole stack if it detects a USB
storage device. The usb_drv's device report does not contain the device
class so far though but adding that to the report is easy.
Now, I'm spawning this USB block driver dynamically, which then tries to
connect to the usb driver. In my scenario, the usb driver is found, but
at some point usb_block just hangs the first time it reaches the line:

iface.bulk_transfer(p, ep, block, &c);

(usb_block/main.cc line 308, called from line 432, as I verified with
print statements in the code)

The bulk_transfer method (with block=true) blocks indefinitely.

I do not know what causes this. I think it might be the usb interface
specified in the config of usb_block. The config passed to usb_block
looks like this:

<config label="usb-3-1" report="yes" writeable="yes" interface="0"
lun="0" />

Where usb-3-1 is the correct device label. Omitting the interface and
lun attributes from the config gives the same error. The usb driver
config (which happens to be generated by a usb_report_filter) looks
like this:
<config uhci="yes" ehci="yes" xhci="yes">
<hid/>
<raw>
<report devices="yes"/>
<policy label="media -> usb_blk -> usb-1-3" vendor_id="0x058f" product_id="0x6387" bus="0x0001" dev="0x0003"/>
</raw>
</config>

Since the usb block driver gives the "Device plugged" message, I'd say
the problem is not that it cannot find the right device or driver. The
config of the usb driver also allows usb_block to open a usb session
(otherwise the policy parser would have thrown an error).

Besides this, the usb driver gives "Could not read string descriptor
index: 0" warnings somewhere inside initialize() at line 365 of
usb_block. Otherwise, nothing is to be seen. This makes it look like
usb_block is connected to the usb driver. Besides, no other components
provide the usb service in my scenario. I am sure the usb stick
itself is formatted properly (it has worked in another scenario and on
linux as well).

Can anybody help me with this?
--
Met vriendelijke groet / kind regards,

Boris Mulder

Cyber Security Labs B.V. | Gooimeer 6-31 | 1411 DD Naarden | The Netherlands
+31 35 631 3253 (office)
Josef Söntgen
2017-02-10 17:23:02 UTC
Permalink
Hello Boris,
Post by Boris Mulder
Now, I'm spawning this usb block driver dynamically, which then tries to
connect to the usb driver. In my scenario, the usb driver is found, but
at some point the usb_block just hangs at the first time it reaches the
iface.bulk_transfer(p, ep, block, &c);
(usb_block/main.cc line 308, called from line 432 as I verified with
print statements in the code)
the bulk_transfer method (with block=true) blocks indefinitely.
It looks like the INQUIRY command does not complete; I already observed
this behaviour with a Delock USB SATA adapter. When using an HDD, we
might need to issue a START STOP UNIT command to get the device into a
working state before executing any other command, but I have not looked
into that so far.
Post by Boris Mulder
<config uhci="yes" ehci="yes" xhci="yes">
<hid/>
<raw>
<report devices="yes"/>
<policy label="media -> usb_blk -> usb-1-3" vendor_id="0x058f" product_id="0x6387" bus="0x0001" dev="0x0003"/>
</raw>
</config>
That being said, judging by the vendor and product id, you are using a
Transcend USB stick. We have had problems with such sticks in the past,
even when using the usb_drv's built-in storage driver. So could you
please try a stick from another vendor, just to make sure that it is
indeed the combination of stick and driver that does not work.

Most likely, we either do not wait long enough in the usb_block driver
for the device to get itself into a working state, or we do not perform
all of the necessary configuration, i.e., applying quirks and the like,
to get it there.


Regards
Josef
--
Josef Söntgen
Genode Labs

http://www.genode-labs.com/ · http://genode.org/
Boris Mulder
2017-02-13 09:36:50 UTC
Permalink
All right, now I'm using a SanDisk stick (<device vendor_id="0x0781"
product_id="0x5591"/>), and it does not give this error. However, when I
try to list the files and their contents in the root directory using
File_system::read, I get a bunch of other errors:

[init -> media] child "rump_fs1" announces service "File_system"

(here it calls dir() )

[init -> media -> usb_blk] Error: complete error: packet not succeded
[init -> media -> usb_blk] Error: request pending: tag: 5 read: 0
buffer: 0x406800 lba: 7423 size: 4096

(here it calls File_system::read() )

[init -> media -> usb_blk] Error: complete error: packet not succeded
[init -> media -> usb_blk] Error: request pending: tag: 6 read: 1
buffer: 0x406800 lba: 11511 size: 4096
[init -> media -> usb_blk] Error: complete error: packet not succeded
[init -> media -> usb_blk] Error: request pending: tag: 7 read: 1
buffer: 0x406800 lba: 11511 size: 4096

and from here these errors keep coming indefinitely (with the same error
code, except for the tag, which keeps advancing). It looks like the
driver retries the same packet over and over without ever stopping.
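As a defensive measure (a hypothetical sketch of my own, not existing usb_block code), the driver could count resubmissions per request and give up after a fixed limit, so a misbehaving device surfaces as a failed packet instead of an endless retry loop:

```cpp
/*
 * Hypothetical sketch: cap the number of resubmissions of a failed
 * request. The names are made up; real usb_block tracks requests
 * differently. After MAX_ATTEMPTS the request would be acknowledged
 * to the client with an error instead of being retried again.
 */
enum { MAX_ATTEMPTS = 3 };

struct Request
{
	unsigned tag;
	unsigned attempts = 0;

	/* returns true if the request may be resubmitted once more */
	bool retry() { return ++attempts < MAX_ATTEMPTS; }
};
```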

Apparently, it does see the dir_handle returned by dir("/") as valid
(the valid() check succeeds).

Any ideas as to what causes this?
Met vriendelijke groet / kind regards,

Boris Mulder

Cyber Security Labs B.V. | Gooimeer 6-31 | 1411 DD Naarden | The Netherlands
+31 35 631 3253 (office)
Nobody III
2017-01-14 19:58:11 UTC
Permalink
I have personally done some work related to this issue. First off, I would
suggest adding code that allows init to share child services with its parent.
I also have a service_router component that I wrote. You may not be able to
use it directly, but feel free to take some of its code for your
media component. Here's a link to the code:
https://github.com/NobodyIII/genode/tree/master/repos/os/src/server/service_router

The code is a bit messy, so any help on making it ready to merge into the
official Genode repo would be very welcome.
Post by Boris Mulder
Dear Genode developers,
Post by Norman Feske
As a way to support detachment / reattachment of USB storage I’m
thinking about placing the rump_fs and part_blk components in a child
subtree of the CLI component that is spawned on demand and cleaned-up
after use. But this seems a bit like overkill.
That's exactly the right solution. I don't think that it's overkill
either. Spawning rump_fs and part_blk dynamically is certainly quick
enough. Memory-wise, it does not take more resources than a static
scenario either. By letting your CLI component implement the protocol
outlined above, you have full control over the chain of events. Also,
an aborting rump_fs is no longer fatal but can be handled gracefully
by the CLI component. As another benefit, this solution does not require
us to add the notion of hot-plugging to the file-system and block
session interfaces, which would otherwise inflate the complexity of
these interfaces (and thereby of all the clients that rely on them).
Martijn and I have been thinking of a way to implement this, and came to
the conclusion that instead of spawning the stack as children of the CLI
component, it might be better to use a new management component
Post by Norman Feske
A custom runtime/management component could monitor the
usb_drv device report and spawn the whole stack if it detects a USB
storage device. The usb_drv's device report does not contain the device
class so far though but adding that to the report is easy.
This is exactly what we're trying to do now. We want to create a custom
component called "media" that monitors USB devices by reading the
report. It provides a service to other components through which they can
request a file-system session in order to read from / write to the USB
stick. For this, it spawns the part_blk and rump_fs components as
children when the USB stick is plugged in, and kills them once it is
unplugged. The component tree would look like this:

      rump_fs   part_blk
          \       /
   CLI      media      usb_drv
     \        |        /
            init
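The core decision the media component would make per entry of the usb_drv device report can be sketched as follows (my own illustration; it assumes the report is extended with a class attribute as Josef suggested, and the struct stands in for a parsed <device> node):

```cpp
#include <cstdint>

/*
 * Sketch of media's per-device policy decision. 0x08 is the USB
 * mass-storage class code; only for such devices would media spawn
 * the part_blk + rump_fs stack.
 */
struct Device_entry
{
	uint16_t vendor_id;
	uint16_t product_id;
	uint8_t  device_class;
};

enum { USB_CLASS_MASS_STORAGE = 0x08 };

bool spawn_storage_stack(Device_entry const &dev)
{
	return dev.device_class == USB_CLASS_MASS_STORAGE;
}
```

On a report update, media would run this check for each device node, spawn the subtree for new storage devices, and tear it down for entries that disappeared.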
But this raises a few questions. First, the file-system interface needs
to be presented to the client somehow. To avoid adding another layer of
indirection in media, essentially duplicating rump_fs's entire API, we
would like the client (in this case CLI) to be connected directly to
rump_fs. The client can then ask media whether the USB stick is
connected before calling a function of rump_fs.
However, this means that rump_fs provides a service and announces it to
its parent (media), and media has to decide what to do with that
announcement. It could run rump_fs as a slave, but that way the entire
API would need to be copied into media so that media can present it as
its own service to the client.
So we would like media to announce the file-system service to its parent
(in this case init), so that any client can use this service. In the
same way, any session request would be passed from CLI to its parent
(init), init would pass it to media, and media would pass it on to
rump_fs. However, the current implementation and specification of Genode
do not allow the services of a server to be visible at any level above
the server's parent. Services can only be provided to direct parents and
to other components in the parent's subtree. Therefore, copying the API
from the child into the parent seems unavoidable.
Another problem that pops up is that media has to spawn all these
subcomponents as children. In order to route block-session requests from
rump_fs to part_blk, media needs to implement some routing policy and
effectively serves the same role for these two components as init serves
for its own children. We see two options:
1. Copy all the necessary routing code from init into media (which is
almost all of init's code if we want to be generic).
2. Let media spawn another init child component (let's call it sub-init
for now) which in turn spawns rump_fs and part_blk and does the routing.
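For option 2, the sub-init configuration could look roughly like this (a sketch only; the service lists, RAM quanta, and the usb_block_drv start name are illustrative, not taken from a working run script):

```xml
<config>
	<parent-provides>
		<service name="ROM"/> <service name="CPU"/>
		<service name="PD"/>  <service name="RM"/>
		<service name="LOG"/> <service name="Usb"/>
	</parent-provides>
	<start name="usb_block_drv">
		<resource name="RAM" quantum="4M"/>
		<provides> <service name="Block"/> </provides>
		<route> <any-service> <parent/> </any-service> </route>
	</start>
	<start name="part_blk">
		<resource name="RAM" quantum="4M"/>
		<provides> <service name="Block"/> </provides>
		<route>
			<service name="Block"> <child name="usb_block_drv"/> </service>
			<any-service> <parent/> </any-service>
		</route>
	</start>
	<start name="rump_fs">
		<resource name="RAM" quantum="16M"/>
		<provides> <service name="File_system"/> </provides>
		<route>
			<service name="Block"> <child name="part_blk"/> </service>
			<any-service> <parent/> </any-service>
		</route>
	</start>
</config>
```

This also makes the limitation concrete: rump_fs announces its File_system service to sub-init only, and sub-init has no means to forward that announcement further up.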
To us, the second option seems much cleaner as it involves no code
duplication. However, services announced by rump_fs cannot be used by
other components that are not children of the new init, and are
therefore of little use. Their announcements cannot be passed on to the
parents, leaving us with the same problem we had with rump_fs, with the
additional problem that even if there were a custom way to forward
service announcements and requests to the parent and child respectively,
sub-init has no such policy, and this functionality would have to be
added to sub-init's code as well, adding a lot of complexity.
Eventually, both cases boil down to the same problem: service
announcements to a parent cannot automatically be forwarded to that
parent's parent, and likewise, session requests cannot easily be
delegated to children of children. The only option, if I'm correct, is
to implement this functionality manually, but that does not work if the
parent is an existing component that does not support it.
Is there a reason this is never done? For init it is clear that it would
never pass an announcement to its parent (usually core) or receive
session requests from it. But what about the general case?
And how should we solve cases such as the above scenario?
kind regards,
Boris Mulder
Cyber Security Labs B.V. | Gooimeer 6-31 | 1411 DD Naarden | The Netherlands
+31 35 631 3253 (office)
_______________________________________________
genode-main mailing list
https://lists.sourceforge.net/lists/listinfo/genode-main