
List:       lustre-discuss
Subject:    Re: [lustre-discuss] [EXTERNAL] [BULK]  MDS hardware - NVME?
From:       "Vicker, Darby J. (JSC-EG111)[Jacobs Technology, Inc.] via lustre-discuss" <
Date:       2024-01-08 15:23:21
Message-ID: 6B0EFAAD-1BC0-4CCD-836E-EC67A164D4FB () nasa ! gov

Our setup has a single JBOD connected to 2 servers but the JBOD has dual controllers.
Each server connects to both controllers for redundancy so there are 4 connections to
each server.  So we have a paired HA setup where one peer node can take over the
OSTs/MDTs of its peer node.  Some specifics on our hardware:

Supermicro twin servers:
https://www.supermicro.com/products/archive/system/sys-6027tr-d71frf

JBOD:
https://www.supermicro.com/products/archive/chassis/sc946ed-r2kjbod

Each node can "zpool import" all pools from either peer.  Here is an excerpt from our
ldev.conf file:


#local  foreign/-  label       [md|zfs:]device-path   [journal-path]/- [raidtab]

# primary hpfs-fsl (aka /nobackup) lustre file system
hpfs-fsl-mds0.fsl.jsc.nasa.gov  hpfs-fsl-mds1.fsl.jsc.nasa.gov  hpfs-fsl-MDT0000  zfs:mds0-0/meta-fsl

hpfs-fsl-oss00.fsl.jsc.nasa.gov hpfs-fsl-oss01.fsl.jsc.nasa.gov hpfs-fsl-OST0000  zfs:oss00-0/ost-fsl
hpfs-fsl-oss00.fsl.jsc.nasa.gov hpfs-fsl-oss01.fsl.jsc.nasa.gov hpfs-fsl-OST000c  zfs:oss00-1/ost-fsl

hpfs-fsl-oss01.fsl.jsc.nasa.gov hpfs-fsl-oss00.fsl.jsc.nasa.gov hpfs-fsl-OST0001  zfs:oss01-0/ost-fsl
hpfs-fsl-oss01.fsl.jsc.nasa.gov hpfs-fsl-oss00.fsl.jsc.nasa.gov hpfs-fsl-OST000d  zfs:oss01-1/ost-fsl



If you wanted to fail oss01's OST's over to oss00, you'd do a "service lustre stop"
on oss01 followed by a "service lustre start foreign" on oss00.  This setup has been
stable and has served us well for a long time.  Our servers are stable enough that we
never set up automated failover via corosync or something similar.
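
For anyone setting up something similar, a minimal sketch of that manual failover
sequence (init script names as in our setup; yours may differ):

  # On oss01: stop Lustre, which unmounts its OSTs and exports its local pools
  service lustre stop

  # On oss00: start the "foreign" targets assigned to it in /etc/ldev.conf,
  # which imports oss01's pools (oss01-0, oss01-1) and mounts those OSTs
  service lustre start foreign

  # If oss01 went down hard and didn't export cleanly, a forced import may be
  # needed first, e.g.:
  zpool import -f oss01-0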



From: Vinícius Ferrão <ferrao@versatushpc.com.br>
Date: Sunday, January 7, 2024 at 12:06 PM
To: "Vicker, Darby J. (JSC-EG111)[Jacobs Technology, Inc.]" <darby.vicker-1@nasa.gov>
Cc: Thomas Roth <t.roth@gsi.de>, Lustre Diskussionsliste <lustre-discuss@lists.lustre.org>
Subject: Re: [lustre-discuss] [EXTERNAL] [BULK] MDS hardware - NVME?



Hi Vicker, may I ask if you have any kind of HA on this setup?

If yes, I'm interested in how the ZFS pools would migrate from one server to another
in case of failure. I'm considering the typical Lustre deployment where you have two
servers attached to two JBODs using a multipath SAS topology with crossed cables:
|X|.

I can easily understand that when you have hardware RAID running on the JBOD and SAS
HBAs on the servers, but for a total software solution I'm unaware of how that will
work effectively.

Thank you.


On 5 Jan 2024, at 14:07, Vicker, Darby J. (JSC-EG111)[Jacobs Technology, Inc.] via
lustre-discuss <lustre-discuss@lists.lustre.org> wrote:

We are in the process of retiring two long-standing LFS's (about 8 years old), which
we built and managed ourselves.  Both use ZFS and have the MDTs on SSDs in a JBOD
that require the kind of software-based management you describe, in our case ZFS
pools built on multipath devices.  The MDT in one is ZFS and the MDT in the other LFS
is ldiskfs but uses ZFS and a zvol as you describe - we build the ldiskfs MDT on top
of the zvol.  Generally, this has worked well for us, with one big caveat.  If you
look for my posts to this list and the ZFS list you'll find more details.  The short
version is that we utilize ZFS snapshots and clones to do backups of the metadata.
We've run into situations where the backup process stalls, leaving a clone hanging
around.  We've experienced a situation a couple of times where the clone and the
primary zvol get swapped, effectively rolling back our metadata to the point when the
clone was created.  I have tried, unsuccessfully, to recreate that in a test
environment.  So if you do that kind of setup, make sure you have good monitoring in
place to detect if your backups/clones stall.  We've kept up with Lustre and ZFS
updates over the years and are currently on Lustre 2.14 and ZFS 2.1.  We've seen the
gap between our ZFS MDT and ldiskfs performance shrink to the point where they are
pretty much on par with each other now.  I think our ZFS MDT performance could be
better with more hardware and software tuning, but our small team hasn't had the
bandwidth to tackle that.
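
If you go the snapshot/clone route, a minimal monitoring sketch along those lines
(dataset names follow our ldev.conf excerpt above; the check itself is just an
example, not our actual tooling):

  # Snapshots left on the MDT dataset from previous backup runs
  zfs list -t snapshot -r mds0-0/meta-fsl

  # Datasets in the pool that are clones (non-empty "origin" property);
  # a backup clone still present long after the backup window is a red flag
  zfs list -r mds0-0 -o name,origin | awk 'NR>1 && $2 != "-"'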

Our newest LFS is vendor provided and uses NVMe MDTs.  I'm not at liberty to talk
about the proprietary way those devices are managed.  However, the metadata
performance is SO much better than our older LFS's, for a lot of reasons, and I'd
highly recommend NVMe for your MDTs.

-----Original Message-----
From: lustre-discuss <lustre-discuss-bounces@lists.lustre.org> on behalf of Thomas
Roth via lustre-discuss <lustre-discuss@lists.lustre.org>
Reply-To: Thomas Roth <t.roth@gsi.de>
Date: Friday, January 5, 2024 at 9:03 AM
To: Lustre Diskussionsliste <lustre-discuss@lists.lustre.org>
Subject: [EXTERNAL] [BULK] [lustre-discuss] MDS hardware - NVME?


Dear all,


considering NVME storage for the next MDS.


As I understand, NVME disks are bundled in software, not by a hardware raid
controller. This would be done using Linux software raid, mdadm, correct?
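
I.e. something along these lines (device names, fsname and MGS NID are just
placeholders, not a tuned recipe):

  # RAID-10 across four NVMe namespaces
  mdadm --create /dev/md0 --level=10 --raid-devices=4 /dev/nvme0n1 /dev/nvme1n1 /dev/nvme2n1 /dev/nvme3n1

  # Format the array as an ldiskfs MDT
  mkfs.lustre --mdt --fsname=example --index=0 --mgsnode=mgs@tcp /dev/md0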


We have some experience with ZFS, which we use on our OSTs.
But I would like to stick to ldiskfs for the MDTs - a zpool with a zvol on top,
which is then formatted with ldiskfs, is too much voodoo...
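
For reference, the zvol variant I'd rather avoid would look roughly like this (pool
name, zvol size and fsname are placeholders):

  zpool create mdtpool mirror /dev/nvme0n1 /dev/nvme1n1
  zfs create -V 10T mdtpool/mdt0
  mkfs.lustre --mdt --fsname=example --index=0 --mgsnode=mgs@tcp /dev/zvol/mdtpool/mdt0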


How is this handled elsewhere? Any experiences?




The available devices are quite large. If I create a raid-10 out of 4 disks, e.g. 7
TB each, my MDT will be 14 TB - already close to the 16 TB limit. So no need for a
box with lots of U.3 slots.


But for MDS operations, we will still need a powerful dual-CPU system with lots of
RAM. Then the NVME devices should be distributed between the CPUs?
Is there a way to specify this in a call for tender?
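
One thing we could at least do is verify the NUMA placement on delivered hardware,
assuming the usual Linux sysfs layout:

  # NUMA node each NVMe controller is attached to
  for d in /sys/class/nvme/nvme*; do
      echo "$(basename $d): node $(cat $d/device/numa_node)"
  done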




Best regards,
Thomas


--------------------------------------------------------------------
Thomas Roth


GSI Helmholtzzentrum für Schwerionenforschung GmbH
Planckstraße 1, 64291 Darmstadt, Germany, http://www.gsi.de/


Commercial Register / Handelsregister: Amtsgericht Darmstadt, HRB 1528
Managing Directors / Geschäftsführung:
Professor Dr. Paolo Giubellino, Dr. Ulrich Breuer, Jörg Blaurock
Chairman of the Supervisory Board / Vorsitzender des GSI-Aufsichtsrats:
State Secretary / Staatssekretär Dr. Volkmar Dietz




_______________________________________________
lustre-discuss mailing list
lustre-discuss@lists.lustre.org
http://lists.lustre.org/listinfo.cgi/lustre-discuss-lustre.org



_______________________________________________
lustre-discuss mailing list
lustre-discuss@lists.lustre.org
http://lists.lustre.org/listinfo.cgi/lustre-discuss-lustre.org





_______________________________________________
lustre-discuss mailing list
lustre-discuss@lists.lustre.org
http://lists.lustre.org/listinfo.cgi/lustre-discuss-lustre.org

