Discussion: [dash-dev] Maven (again)
Alex Blewitt
2012-09-06 21:33:51 UTC
I've updated the Maven configuration so that it should no longer duplicate data for repo1/repo2: there is now a single 'central' repository with repo1/repo2 as mirrors.

I've also discovered that the scheduled jobs on Nexus are basically broken, and there's nothing obvious as to why. I suspect a Nexus restart may fix it, but since I don't know whether it's in use or not, I suggest doing that at a quieter time of day.

The trash appears to build up and never get emptied; the automated empty-trash job appears to fail as well. On top of that, I/O performance on the VM is so hideously bad that deleting files is a real nightmare; to clean it up, I had to execute:

cd sonatype-work/nexus/trash
# delete each second-level trash entry in its own background job;
# quoting "$i" guards against paths with spaces
for i in */*/*
do
rm -rf "$i" &
done

Whilst you can theoretically do this with a single rm -rf, the backgrounded jobs appear to be the only way to get through it in a reasonable period of time. As a result, almost half of the box's disk space is now free.

Filesystem Size Used Avail Use% Mounted on
/dev/xvda2 19G 8.3G 9.7G 47% /

To avoid this problem in the future, if Nexus isn't going to play ball, we should consider setting up a cron job to do the same thing, or log in periodically and flush the trash by hand. Perhaps this will be fixed by a Nexus 2.x install instead.
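
If we go the cron route, something along these lines would do it. This is only a sketch: the trash path assumes the default sonatype-work layout on this box, and the script name, location and schedule are placeholders.

#!/bin/sh
# /usr/local/bin/flush-nexus-trash.sh -- hypothetical name and location
# Empty the Nexus trash; the path assumes the default sonatype-work layout.
TRASH=/path/to/sonatype-work/nexus/trash
# Remove each top-level trash entry; -mindepth 1 keeps the trash dir itself.
find "$TRASH" -mindepth 1 -maxdepth 1 -exec rm -rf {} +

# example crontab entry (crontab -e as the user that owns the Nexus files):
# 15 3 * * *  /usr/local/bin/flush-nexus-trash.sh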

Alex
Denis Roy
2012-09-14 13:59:09 UTC
Alex,

Block device performance is never spectacular on these virtualized
machines -- that's why we use them for redundant front-ends.

However, having a single CPU and 1G of RAM is certainly not helping. If
I add a CPU and increase the RAM to 3G, can I restart the VM? Will
Nexus start itself automatically?

Thanks,

Denis
Post by Alex Blewitt
I've updated the Maven configuration so that it shouldn't have the same duplication of data for repo1/repo2 as there's now a 'central' which has repo1/repo2 as mirrors.
I've also discovered that basically the jobs on nexus are broken, and there's nothing obvious as to why. I suspect a nexus restart may fix it, but since I don't know if it's being used or not, I suggest doing it at a quieter time of day.
cd sonatype-work/nexus/trash
for i in */*/*
do
rm -rf $i &
done
Whilst you can theoretically do this in a single rm -rf, the jobs appear to be the only way to do this in a reasonable period of time. As a result we now have almost 1/2 the box space free.
Filesystem Size Used Avail Use% Mounted on
/dev/xvda2 19G 8.3G 9.7G 47% /
To avoid the problems in the future, if Nexus isn't going to play ball, we should consider setting up a cron job to do the same thing, or log in and periodically flush it manually. Perhaps this will be fixed with a Nexus 2.x install instead.
Alex
Denis Roy
2012-09-14 14:12:36 UTC
Actually, on examining the host system more closely, there is a failed
drive... therefore, the write-back cache is disabled.

I believe we have a spare here -- I'll see if I can replace it today.

Denis
Post by Denis Roy
Alex,
Block device performance is never spectacular on these virtualized
machines -- that's why we use them for redundant front-ends.
However, having a single CPU and 1G of RAM is certainly not helping.
If I add a CPU and increase RAM to 3G can I restart the VM? Will
Nexus start itself automatically?
Thanks,
Denis
Post by Alex Blewitt
I've updated the Maven configuration so that it shouldn't have the
same duplication of data for repo1/repo2 as there's now a 'central'
which has repo1/repo2 as mirrors.
I've also discovered that basically the jobs on nexus are broken, and
there's nothing obvious as to why. I suspect a nexus restart may fix
it, but since I don't know if it's being used or not, I suggest doing
it at a quieter time of day.
The trash appears to build up and never get emptied; the automated
empty job appears to fail as well. Plus, the performance of IO on the
VM is so hideously bad that deleting files is a real nightmare; to clean it up, I had to execute:
cd sonatype-work/nexus/trash
for i in */*/*
do
rm -rf $i &
done
Whilst you can theoretically do this in a single rm -rf, the jobs
appear to be the only way to do this in a reasonable period of time.
As a result we now have almost 1/2 the box space free.
Filesystem Size Used Avail Use% Mounted on
/dev/xvda2 19G 8.3G 9.7G 47% /
To avoid the problems in the future, if Nexus isn't going to play
ball, we should consider setting up a cron job to do the same thing,
or log in and periodically flush it manually. Perhaps this will be
fixed with a Nexus 2.x install instead.
Alex
--
*Denis Roy*
Director, IT Services
Eclipse Foundation, Inc. -- http://www.eclipse.org/
Office: 613.224.9461 x224 (Eastern time)
***@eclipse.org
Aaron Digulla
2012-09-14 15:00:02 UTC
Post by Denis Roy
Alex,
Block device performance is never spectacular on these virtualized
machines -- that's why we use them for redundant front-ends.
However, having a single CPU and 1G of RAM is certainly not helping.
If I add a CPU and increase RAM to 3G can I restart the VM? Will
Nexus start itself automatically?
Yes, there is a script in init.d.
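
For reference, something like the following should confirm that it comes back on reboot and can be restarted by hand. The script name and the Red Hat-style chkconfig call are assumptions about how the box is set up.

# check that the init script is there and registered for the default runlevels
ls -l /etc/init.d/nexus*
chkconfig --list nexus    # Red Hat-style; on Debian/Ubuntu look in /etc/rc*.d instead
# restart by hand after the VM comes back, if it doesn't start on its own
/etc/init.d/nexus restart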

A second CPU might help, but I doubt that more RAM will help that much
:-/ The main problem is disk space, because someone decided to mirror
Maven Central - which needs **much** more disk space than my original
plan, which was to host only the converted Maven artifacts on the system.
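
To put a number on that, something like this should show how much of the
disk the proxied Central content is actually taking, compared to the
Eclipse-hosted repositories. It assumes the default sonatype-work storage
layout and a repository id of 'central'; both are guesses about this install.

cd sonatype-work/nexus/storage
du -sh central        # proxied Maven Central cache (repository id is a guess)
du -sh * | sort -h    # all repositories, largest last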

So I suggest we either give the system 100GB more disk space or move
the Maven Central mirror to a different host.

Regards,
--
Aaron "Optimizer" Digulla a.k.a. Philmann Dark
"It's not the universe that's limited, it's our imagination.
Follow me and I'll show you something beyond the limits."
http://www.pdark.de/ http://blog.pdark.de/
Denis Roy
2012-09-14 15:13:05 UTC
Post by Aaron Digulla
Post by Denis Roy
Alex,
Block device performance is never spectacular on these virtualized
machines -- that's why we use them for redundant front-ends.
However, having a single CPU and 1G of RAM is certainly not helping.
If I add a CPU and increase RAM to 3G can I restart the VM? Will
Nexus start itself automatically?
Yes, there is a script in init.d.
A second CPU might help but I doubt that more RAM will help that much :-/
With only 1G of RAM and a sizeable Java process, there is little room
for file buffers and cache.
Post by Aaron Digulla
The main problem is disk space because someone decided to mirror Maven
Central - which needs **much** more disk space than my original plan
to host only the converted Maven artifacts on the system.
Why don't we discontinue mirroring the planet? Instead, since we have a
proxy server for the Hudson machines, we can enable caching there. That
way the benefit is much wider than just Maven Central.
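
If we went that way, the Maven runs on the Hudson slaves would need to be
told to go through that proxy; a <proxies> entry in a settings.xml is the
standard way. A rough sketch only, with the proxy host and port below as
placeholders rather than the actual machine:

cat > /tmp/hudson-proxy-settings.xml <<'EOF'
<settings>
  <proxies>
    <proxy>
      <id>eclipse-proxy</id>
      <active>true</active>
      <protocol>http</protocol>
      <host>proxy.example.org</host>  <!-- placeholder host -->
      <port>3128</port>               <!-- placeholder port -->
    </proxy>
  </proxies>
</settings>
EOF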

Thoughts?

D.
Alex Blewitt
2012-09-14 18:42:43 UTC
The main problem is disk space because someone decided to mirror Maven Central - which needs **much** more disk space than my original plan to host only the converted Maven artifacts on the system.
Why don't we discontinue mirroring the planet? Instead, since we have a proxy server for the Hudson machines, we can enable caching there. That way the benefit is much wider than just maven central.
I believe the Maven Central part was only supposed to be for the benefit of the local Hudson slaves at Eclipse, rather than something the server was ever really designed to cope with. Unfortunately the 'build' URL included it, and I don't know whether there is a way to update the settings that the Maven jobs on Hudson run with so that they point elsewhere. If we can swap it out for something else, so much the better.
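
One option, though I haven't tried it on our Hudson, would be to give those jobs an explicit settings file via Maven's -s/--settings flag, with a mirror entry pointing wherever we want the central traffic to go. A sketch only; the file path and mirror URL below are placeholders:

cat > /tmp/hudson-mirror-settings.xml <<'EOF'
<settings>
  <mirrors>
    <mirror>
      <id>central-mirror</id>
      <mirrorOf>central</mirrorOf>
      <!-- placeholder URL: whatever replaces the current maven.eclipse.org proxy -->
      <url>http://mirror.example.org/maven2</url>
    </mirror>
  </mirrors>
</settings>
EOF
mvn -s /tmp/hudson-mirror-settings.xml clean install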

Ultimately the http://maven.eclipse.org site should just make the Eclipse-hosted Maven content available, and not include any other downstream repositories. Or, even better, hand it off to the CBI guys to maintain and look after - it's not like the Maven server is used for anything other than the maven-signing-plugin that was put together …

With the platform moving to CBI, I suggest we let those POMs be part of the published metadata and let the CBI team manage this instance.

Alex
David Carver
2012-09-14 19:12:52 UTC
Correct. Maven Central was mirrored there for the benefit of the
Eclipse Hudson jobs, so they didn't have to go out to Central all the
time. I think maybe only a handful of jobs are using it. You can
always just shut the URL down and send out a message to
cross-project-dev letting people know that, if they are using it to
fetch Central artifacts, they need to update their POMs.
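
Before pulling the plug, a quick grep across the Hudson workspaces would
show who still references it (the path below is a placeholder for wherever
the jobs actually live):

grep -rl --include=pom.xml "maven.eclipse.org" /path/to/hudson/jobs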

To my knowledge, no settings.xml is currently being used on any of the
Hudson instances.

Dave
Post by Alex Blewitt
The main problem is disk space because someone decided to mirror Maven Central - which needs **much** more disk space than my original plan to host only the converted Maven artifacts on the system.
Why don't we discontinue mirroring the planet? Instead, since we have a proxy server for the Hudson machines, we can enable caching there. That way the benefit is much wider than just maven central.
I believe that the maven central part was supposed to be only for the benefit of the local hudson slaves at Eclipse, rather than something which the server was ever really designed to cope with. Unfortunately the 'build' URL included it and I don't know if there is a way to update the settings that the MAven jobs run on Hudson to point elsewhere. If we can get rid of it for something else, so much the better.
Ultimately the http://maven.eclipse.org site should just make the Eclipse hosted maven content available, and not include any other downstream repositories. Or even better, hand it off to the CBI guys to maintain and look after - it's not like the Maven server is used for anything other than the maven-signing-plugin that was put together …
With the platform moving to CBI I suggest we let those POMs be part of the published metadata and they can manage this instance.
Alex