> I would like to be able to get updates from Novell for SLES/OES-Linux/NLD
> etc and distribute them.
> The sales blurb implies this can be done with ZLM, but I cannot find any
> docs on how to actually do it.
> Anyone got any ideas?
> Thanks
> Mike


First, it can be done. The core of the puzzle is configuring and then
using zlmmirror. That is the tool which talks to the remote archive and
also to the local ZLM data store. The ZLM server itself is just an
accountant, not a file fetcher.
Second, the update server in Germany lacks most of the for-fee patches;
the one in Provo has them.
Third, Provo runs ZLM 6.5, which means it presents a Red Carpet interface
to callers such as ourselves.
Thus the controlling .xml file for zlmmirror needs to have configuration
like this for the remote server:
Base of https://update.novell.com/data
Type of RCE
User of the hex gibberish from a live OES box, file /etc/ximian/mcookie (copy it)
Password ditto, file /etc/ximian/partnernet
Best not to ask why, but we surmise it is yet another way of saying who we are.
The credentials for the local ZLM server are typically user administrator
and its password, with a type of ZLM.
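To make that concrete, below is a minimal sketch of what the remote and local
server parts of the zlmmirror .xml file can look like. Treat the element names
as approximate and the values as placeholders (the mcookie/partnernet strings
and the admin password are of course your own); compare it against the sample
configuration file that ships with zlmmirror before trusting it.

   <ZLMMirrorConf>
     <Session>
       <RemoteServer>
         <Base>https://update.novell.com/data</Base>
         <Type>RCE</Type>
         <User>hex-string-copied-from-/etc/ximian/mcookie</User>
         <Password>hex-string-copied-from-/etc/ximian/partnernet</Password>
       </RemoteServer>
       <LocalServer>
         <Type>ZLM</Type>
         <User>administrator</User>
         <Password>the-local-zlm-admin-password</Password>
       </LocalServer>
       <!-- Catalog sections go here; see the sketch further down -->
     </Session>
   </ZLMMirrorConf>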
The next magic trick is guessing catalog names, and here we use the .xml
config file built so far, plus zlmmirror again, to help us:
zlmmirror slc -v -c myconfig.xml
This asks the remote server for its catalogs. In the end the catalogs we
want are named "oes", "nld9", "nld9-extras", "nld9-sdk". A target such as
nld-9-i586 can be used to restrict downloads to the 32-bit material.
Naturally, credentials used to access the NLD material are taken from a
live NLD machine, not an OES machine.
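A catalog section for that material might then look like this sketch, using
only the names found above (again, the exact element names should be checked
against the zlmmirror sample file):

   <Catalog>
     <Name>oes</Name>
   </Catalog>
   <Catalog>
     <Name>nld9</Name>
     <!-- restrict the download to the 32-bit material -->
     <Target>nld-9-i586</Target>
   </Catalog>
   <Catalog>
     <Name>nld9-extras</Name>
   </Catalog>
   <Catalog>
     <Name>nld9-sdk</Name>
   </Catalog>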

For SLES9 we go to Germany, which runs YOU (YaST Online Update), not Red Carpet nor ZLM 7 -
Base of http://sdb.suse.de/download
User your SuSE portal username
Password ditto, not the hex gibberish stuff
Type of YAST

Local ZLM setup is the same as above for OES et al.
The catalog is sles-9-i586.
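For the SLES9/YOU case the corresponding remote server and catalog sections
would be sketched as below, with the same caveats about element names and with
the portal credentials as placeholders:

   <RemoteServer>
     <Base>http://sdb.suse.de/download</Base>
     <Type>YAST</Type>
     <User>your-suse-portal-username</User>
     <Password>your-suse-portal-password</Password>
   </RemoteServer>
   <Catalog>
     <Name>sles-9-i586</Name>
   </Catalog>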

We then fetch files by saying
zlmmirror mirror -v -c myconfig.xml
(-v is verbose so we can watch the progress). Doing all of the above will
take many hours and consume many GB.
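Since the run takes that long, one possibility (my habit, not anything ZLM
requires, and the paths here are only examples) is to let it go unattended
and keep a log:

   # run the mirror pass in the background and capture its output
   nohup zlmmirror mirror -v -c /root/myconfig.xml > /var/log/zlmmirror-run.log 2>&1 &

A cron entry doing the same thing works for keeping the mirror current later on.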
You may ask why OES alone is not sufficient, since SLES9 is its base. It has
to do with the way the ZLM database collects information and ties it to a
kind of machine. Not clever, but it does work.
The chances of this working to completion are better than zero but not 100%,
as any tiny problem causes ZLM to give up entirely. ZLM's handling of its
accumulated data is "dirty" in that old things are not cleaned up. I have
had to wipe out the contents of directories under
/var/opt/novell/zenworks/pkg-repo/ after removing the logical names via the
web server page.
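For that cleanup, a hypothetical example (substitute the real directory name
under pkg-repo, and only do this after the logical names have been removed
via the web page):

   # remove stale package data left behind for a withdrawn catalog
   rm -rf /var/opt/novell/zenworks/pkg-repo/<catalog-directory>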
Joe Doupnik