News feed

LHC@Home
2024-03-25: BOINC Needs Votes at an Upcoming UN Forum

BOINC is a finalist for a notable award, and needs your vote (*by Sunday):

Context: The World Summit on the Information Society (WSIS) is a United Nations-sponsored initiative aimed at harnessing the potential of information and communication technologies to build inclusive and equitable information societies worldwide. BOINC has been nominated for a prize at the 2024 forum, and has passed initial hurdles; the next and last step ("Phase 3") requires public votes. The award would be a very nice boost and validation for BOINC and all our projects; if we can get our communities to vote, we should have a decent shot at this point...

Voting is pretty simple and takes just a few minutes; instructions are here.

(*The deadline for votes is Sunday: 31 March 2024, 23:00 UTC+02:00)

2023-12-22: Season's greetings

Many thanks to all our volunteers for your contributions to LHC@home over the last year!

Some highlights from CERN during 2023 can be seen in this video.

The LHC@home team wishes you a Merry Christmas, restful holidays and all the best for 2024!

2023-09-29: Theory application reaches 6 TRILLION events!!

Bravo to all Theory crunchers!!!

2023-09-17: Downtime Monday 18th

The LHC@home BOINC servers will be degraded on Monday 18th of September due to a database upgrade. BOINC clients are likely to generate errors when trying to download or upload tomorrow morning.

Thanks for your contributions and happy crunching!

2023-01-24: Server upgrade

The LHC@home BOINC servers have been upgraded to the latest server release, 1.4.2.

2022-12-21: Season's greetings

Many thanks to all our volunteers for your contributions to LHC@home over the last year!

We in the LHC@home team wish you a Merry Christmas and restful holidays.

2022-01-05: Best wishes for 2022

The LHC@home team wishes you all a Happy New Year and all the best for 2022!

The simulations carried out under LHC@home contribute to improving the LHC accelerator as well as the experiments. The upcoming Run 3 of the LHC will start soon.

Meanwhile you can take a look at this video with highlights from CERN during 2021.

Many thanks for your contributions and happy crunching!

2021-12-09: Lack of CMS tasks due to a problem in WMAgent development

Unfortunately, I have been unable to submit new workflows to the CMS project since yesterday, and the job queues have now drained.
The cause is a change introduced in the development of the CMS workflow management system. These changes are tested first on a development system before being moved to the production system. We currently use the development system to run CMS@Home, so the change is impacting us.
I'm trying to find out when a fix will be forthcoming; until then, please set No New Tasks for CMS or switch to another project.
I'm sorry about this. I will let you know when I am able to submit jobs again.

2021-08-26: Power Outage

There is currently a power outage in the CERN computer centre. LHC@home services may be affected.

2021-08-26: CMS App Downtime

Due to an issue with the authentication service used by the CMS App, the job queue has been paused. New jobs will be sent again once the issue has been resolved.

2021-08-21: CMS job queue to drain this weekend (21/08/2021)

Oops, I posted this twice to the -dev board, not once there and again here -- sorry!
CMS is about to release a new version of WMAgent based entirely on Python 3. They have asked to be able to update our agent by Monday evening (23/08), so I will not inject any new workflows before the upgrade. I expect the job queue to drain by late on Sunday.
Please set your CMS application to no new tasks by then.

2021-05-27: 2021 BOINC Workshop

Videos of the talks from the 2021 BOINC Workshop are now available on YouTube. Day 01 includes a talk giving an overview of LHC@home and Day 02 has another talk which provides more details on the specific technology we use. There are many other interesting talks from the other BOINC projects and from the BOINC developers.

2021-03-29: Database issues

Our database cluster is heavily loaded today, and LHC@home services time out from time to time. Our DBA is trying to fix this. Sorry for the trouble and happy crunching.

2021-01-19: Downtime Wed 20/1

LHC@home servers will be down for a database upgrade tomorrow, Wednesday 20th of January, in the early afternoon GMT.

Sorry for the inconvenience, and happy crunching!

2021-01-05: Best wishes for 2021

We would like to warmly thank all our volunteers for your contributions during 2020!

The LHC@home team also wishes you a Happy and hopefully healthy 2021!

For those interested, please find some highlights of CERN activities during 2020 and recent findings from the CMS experiment on our web pages.

2020-10-13: Possible upload delays Wednesday 14/10

Due to an upgrade of our Ceph storage on Wednesday 14th of October, there might be delays to file uploads and data assimilation. Should be back to normal by Wednesday evening.

2020-09-07: BOINC database downtime

The LHC@home database will be down for a while this morning, due to a network interruption in the CERN data centre. Hence scheduler requests and uploads will fail for a while.

Sorry for the trouble and happy crunching.

2020-07-14: Interruption to CMS@Home, Wednesday 15th July

We need to interrupt the CMS project tomorrow to deploy a new Workflow Management Agent. This means that jobs will not be available from sometime tonight. We recommend that you set your CMS machines to No New Tasks as soon as possible, to avoid tasks terminating with an error if a job can't be fetched.
We anticipate jobs will be available again late Wednesday (European time). I'll update this thread when it is OK to proceed.

2020-07-01: CMS job rundown

We need to do some tests of a patch to fix a bug that's been plaguing us for some time. To this end, I am letting the job queues drain, so there will be an absence of CMS jobs -- perhaps as soon as tomorrow morning, depending on how we continue to recover from today's Oracle quota problem.
So, be prepared to set No New Tasks as soon as you see any sign of lack of jobs -- or sooner if you prefer.
I don't know how long the testing will take; there are many factors at work (if the BOINC server sees that there are no jobs available, it will stop sending tasks; that will mean it takes longer for each test batch to be recognised and start serving jobs).

2020-06-25: Downtime Saturday

The database underlying LHC@home will be down on Saturday 27th of June due to an upgrade of a DB storage rack.

Hence LHC@home BOINC services will be unavailable for a good part of the day (estimated 05:30-12:30 UTC).

So your BOINC client connections to our servers are likely to fail on Saturday.

Thanks and happy crunching!

2020-05-27: Server outage Wednesday

Due to a failure of a database storage system, a number of database services at CERN failed on Wednesday afternoon. The LHC@home servers were affected as well, as the BOINC database was unavailable and requests timed out.

Sorry for the trouble and happy crunching!

2020-04-24: CERN and COVID-19

Like many organisations, CERN is also affected by the COVID-19 pandemic. Researchers in the CERN community are trying to help out in different ways, as explained on this web page.

As part of this effort to fight COVID-19, we also contribute computing power to Folding@home and Rosetta@home from temporarily available servers that were about to be decommissioned.

During periods like this with little work from LHC@home, we also encourage you to participate in other BOINC projects such as Rosetta@home and contribute to the global fight against the pandemic.

Many thanks for your contributions to LHC@home and continued happy crunching!

With the best wishes of good health for you and your families from the LHC@home team.

2020-04-14: Server update

The BOINC daemons will be down for a minor server update this afternoon. This brings our environment to the latest minor server release.

2020-04-09: Theory application reaches 5 TRILLION events!!

LHC@home's Theory application will tomorrow pass the milestone of 5 TRILLION simulated events. This project, under its earlier name "Test4Theory", began production in 2011 and was the first BOINC project anywhere to use Virtual Machine technology (based on CERN's CernVM system).

Over the coming weeks we plan to publish some more details about all this on the LHC@home and CERN websites. Our timetables have of course been affected by the Coronavirus disruptions, but we absolutely could not miss announcing and celebrating such a milestone as this.

The whole LHC@home team sends our sincerest thanks to all our volunteers for enabling this achievement!!

2020-03-10: Downtime Wednesday morning

The LHC@home servers will be down tomorrow Wednesday 11th of March from 6AM to 8AM UTC due to a database intervention.

Hence your BOINC clients may defer uploads or downloads. Thanks for your patience and happy crunching!

2020-03-06: LHC@home web site upgrade

The LHC@home information site on Drupal, which includes the FAQ and other information about the applications running on LHC@home, is being upgraded now.

Hence links to the FAQ and other information pages about LHC@home will be unavailable for a while today.

The new site will be ready later this afternoon.

-- The team

2020-02-19: CMS@Home -- ongoing problems

Sorry that the CMS@Home HTCondor server is still playing up. Again over the weekend it refused to serve jobs even though plenty were available. Together with Federica we've decided not to inject another workflow this week, to let it "fail hard" again so that she can investigate which ClassAd preferences are not being met.
So, you will probably see the number of running jobs falling, and the number of errors increasing, in the next few days. Please feel free to set No New Tasks in that case. I won't, so that there is still some pressure for jobs on the server. I've also asked Laurence if I can run the CMS@Home VM outside of BOINC, to get around the quota back-off problem.

2020-02-12: CMS@Home up again

OK, jobs are available again. Sorry for the long delay. Remember, I'm only the front-man for a larger crew, so any downstream delays percolate up to my response. Hopefully this will remain good for some time, but I still don't understand why the condor server occasionally refuses to send out jobs in a timely manner.

2020-02-10: CMS@Home accidentally shut down -- Please set No New Tasks

We need to upgrade the CMS@Home WMAgent before Thursday, so I tried to set the workflows to drain down. Unfortunately, I misunderstood the batch states and killed off most of them instead. :-(. There's one still left with about 200 jobs, so that won't last long.
Please set your CMS projects to No New Tasks to avoid getting lots of computation errors. I'll let you know when the upgrade is done and jobs are flowing again.

2020-01-23: Server outage - uploads failing

Due to a network problem in the CERN computer centre early Thursday morning, our BOINC servers have lost access to a storage cluster. Hence uploads are failing and access to web pages as well. Hopefully this should be fixed soon.

2019-11-27: CMS@Home disruption this week

It appears that a database intervention at CERN went badly, leaving our data tables empty and us unable to submit new CMS@Home jobs. Advice is that it will take several days to recover -- and on top of that, some of the major players are in the USA, which has holidays for the rest of this week. I'll keep an eye on it, but I'm doubtful we'll be running again this week. Sorry 'bout that!
Happy Thanksgiving...

2019-11-22: Database intervention Monday morning

LHC@home and associated BOINC services will be unavailable for about 1 hour on Monday 25th of November due to a database storage intervention.

Thanks for your understanding and happy crunching.

2019-11-11: CMS job shortage Wednesday 13th November

CMS IT will be installing a new version of WMAgent on Wednesday. This will impact job availability for the duration of the intervention. We might be able to eliminate the little gremlin that's been plaguing us for the last few weeks, too.
So, please set your CMS processors to No New Tasks sometime tomorrow, Tuesday 12th, so that current tasks will stop requesting new jobs before the queues get cut. I'll let you know when jobs are available again.
Thanks.

2019-09-30: Server upgrade

Following a couple of weeks of tests in the LHC@home development project, we are upgrading our production server cluster to BOINC server release 1.2 this afternoon. During the update we will be running with slightly lower server capacity than usual.

2019-09-12: The SixTrack team at the LHC@Home desk for the CERN open days

Dear volunteers,

thanks to those who have filled in the doodle we circulated last week:
https://lhcathome.cern.ch/lhcathome/forum_thread.php?id=5130&postid=39794#39794

We decided to deliver a presentation every day in the most popular time slots from the doodle poll, i.e. on Sat 14th Sep between 03:00 and 04:00 PM, and on Sun 15th Sep between 02:00 and 03:00 PM.
The meeting point will be the LHC@Home desk in R2 (building 504), at the beginning of the time slot. We will have to walk a few minutes to a meeting room where the presentations will take place. We will be back at the meeting point by the end of the time slot at the latest.

Looking forward to shaking hands and meeting you,
Alessio and Massimo, for the SixTrack team

2019-09-06: Updated server code

We have updated the lhcathome-dev server code to the latest BOINC server release, 1.1.

Please let us know if you spot any new bugs or unexpected behaviour.

2019-09-02: The SixTrack team welcomes the LHC@Home volunteers at the CERN open days

Dear volunteers,

following Nils's post on the MBs:
https://lhcathome.cern.ch/lhcathome/forum_thread.php?id=5129&postid=39763#39763
the SixTrack team is looking into welcoming you at CERN to thank you for the CPU time you make available to us. To do this in the best way, we would like to know when you will most likely be passing by the IT stand, so that we can concentrate our efforts on the time when most of you can be there. Hence, please find below a doodle that we will use to target the optimal time window:
https://doodle.com/poll/qpw36awgspufawi7

Thanks a lot in advance, and happy crunching!
Alessio and Massimo, for the SixTrack team

2019-08-30: CERN Open Days in 2 weeks!

During the CERN Open Days 2019, we will have a small LHC@home stand as part of the IT activities in building 504 near the Data Centre.

LHC@home will also be present at the ATLAS experiment site, in the ATLAS Computing Corner.

We hope that many of you will be able to visit CERN during the Open Days and would be happy to see you here!

Please refer to: Plan your visit and the list of activities during the Open days for more information about all the visit points on the CERN sites.

2019-08-21: Many queued tasks - server status page erratic

Due to the very high number of queued Sixtrack tasks, we have enabled 4 load-balanced scheduler/feeder servers to handle the demand. (Our bottleneck is the database, but several schedulers can cache more tasks to be dispatched.)

Our server status page does not currently show the daemon status on remote servers in real time. Hence the server status page may indicate a varying number of processes, depending on which web server is active.

Please also be patient if you are not getting tasks for your preferred application quickly enough. After a few retries, there will be some tasks. Thanks for your understanding and happy crunching!

---the team

2019-07-17: CMS@Home disruption, Monday 22nd July

I've had the following notice from CERN/CMS IT:

>> following the hypervisor reboot campaign, as announced by CERN IT here: https://cern.service-now.com/service-portal/view-outage.do?n=OTG0051185
>> the following VMs - under the CMS Production openstack project - will be rebooted on Monday July 22 (starting at 8:30am CERN time):
...
>> | vocms0267 | cern-geneva-b | cms-home

to which I replied:
> Thanks, Alan. vocms0267 runs the CMS@Home campaign. Should I warn the volunteers of the disruption, or will it be mainly transparent?

and received this reply:
Running jobs will fail because they won't be able to connect to the schedd condor_shadow process. So this will be the visible impact on the users. There will be also a short time window (until I get the agent restarted) where there will be no jobs pending in the condor pool.
So it might be worth it giving the users a heads up.

So, my recommendation is that you set "No New Tasks" for CMS@Home sometime Sunday afternoon, to let tasks complete before the 0830 CEST restart. I'll let you know as soon as Alan informs me that vocms0267 is up and running again.

2019-07-05: Native ATLAS and Theory applications require a CVMFS configuration update

Volunteers running ATLAS native and/or Theory native are kindly asked to update their local CVMFS configuration. Please see the following post for the details.
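
For orientation only, a minimal sketch of such an update on a typical Linux host follows; it assumes a standard CVMFS installation whose local configuration lives in /etc/cvmfs/default.local (the linked post remains the authoritative source for the exact values to set):

    # Sketch: apply a local CVMFS configuration change, then reload.
    # The exact keys and values are given in the linked post.
    sudoedit /etc/cvmfs/default.local
    sudo cvmfs_config reload
    cvmfs_config probe    # each configured repository should reply OK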

2019-06-26: Server downtime

Our BOINC servers were unavailable from 13:45 to 15:30 CEST this afternoon due to a problem with a shared storage cluster. This explains possible download/upload errors from your clients.

Sorry for the trouble and happy crunching.

2019-06-18: killing extremely long SixTrack tasks

Dear all,

we had to kill ~10k WUs named:
w-c*_job*__s__62.31_60.32__*__7__*_sixvf_boinc*
due to a mismatch between the requested disk space and that actually needed by the job.
These tasks would in any case be killed by the BOINC manager at a certain point with an EXIT_DISK_LIMIT_EXCEEDED message - please see:
https://lhcathome.cern.ch/lhcathome/forum_thread.php?id=5062
for further info.

These tasks cover 10^7 LHC turns, a factor of 10 larger than usual, with files growing in size until the limit is hit.

The killing does not involve all tasks with such names - I have killed only those that should cover the stable part of the beam; these tasks are expected to last long and hence reach the disk usage limit. The other WUs should see enough beam losses that the limit is not reached - please post in this thread if this is not the case. This selective killing was done in an effort to preserve, as much as possible, tasks already being crunched or pending validation.

As soon as you update the LHC@home project in your BOINC manager, you should see the tasks being killed.

We will soon resubmit the same tasks, with appropriate disk requirements.
Apologies for the disturbance, and thanks for your understanding.
A.

2019-06-07: Using a local proxy to reduce network traffic for CMS

Thanks to computezrmle, with additional work from Laurence and a couple of CMS experts (and my adding one line to the site-local-config file), there is now a way to set up a local caching proxy to greatly reduce your network traffic. Each job instance that runs within a CMS BOINC task must retrieve a lot of set-up data from our database. This data doesn't change very often, so if you keep a local copy the job can access that rather than going over the network every time.
Instructions on how to do this are available at https://lhcathomedev.cern.ch/lhcathome-dev/forum_thread.php?id=475&postid=6396 or https://lhcathome.cern.ch/lhcathome/forum_thread.php?id=5052&postid=39072
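
For illustration only, a local caching proxy can be as simple as a stock squid instance on a machine in your LAN. The sketch below uses placeholder values (LAN range, cache size); the linked instructions remain the authoritative guide, including the squid settings actually recommended for CMS:

    # Hypothetical Debian/Ubuntu sketch; all values are placeholders.
    sudo apt-get install -y squid
    sudo tee /etc/squid/squid.conf >/dev/null <<'EOF'
    http_port 3128
    acl localnet src 192.168.0.0/16               # adjust to your LAN
    http_access allow localnet
    http_access deny all
    cache_dir ufs /var/spool/squid 20000 16 256   # ~20 GB disk cache
    maximum_object_size 1024 MB
    EOF
    sudo systemctl restart squid
    # Then point each BOINC client at <proxy-host>:3128 in its HTTP
    # proxy settings, so the CMS set-up data is fetched via the cache.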

2019-06-04: new exes for SixTrack 5.02.05

Dear volunteers,

we are pleased to announce the release to production (SixTrack app) of new exes for the current pro version (v5.02.05). We have new exes for FreeBSD (avx/sse2), an exe for XP hosts (32bits), an aarch64 executable for Linux, and one for Android. Many thanks to James, Kyrre and Veronica for finding the time to produce them.

Distributing an exe compatible with XP hosts is not a way to encourage people to stay with unsupported OSs, but rather an attempt to allow a smooth transition to more recent OSs. In this way, people with XP hosts do not miss the chance to contribute to the present wave of SixTrack tasks (expected to be quite long) while considering options for upgrading their hosts. At the same time, we are looking into preparing 32bit Linux exes. It should be noted that all Windows exes are distributed without targeting specific kernel versions - hence, XP hosts may receive tasks with regular Windows exes that fail immediately, but the BOINC server should quickly learn that the XP-compatible exe is the appropriate one.

We are also very happy to start involving FreeBSD and Android users in our production chain. For the latter platform, the present exe won't run on Android versions >=8 - James is still looking into this. Since the Android version filtering needs a fix on the scheduler side:
https://github.com/BOINC/boinc/issues/3172
we labelled the Android exe as beta. Hence, SixTrack beta users with Android 8 and later should either not request tasks for that host or untick the test applications flag in their LHC@home project preferences.

We are also pursuing the generation of MacOS exes, and we should test them soon on sixtracktest.

Thanks for your continuous support and help,
Alessio, for the SixTrack team

2019-05-20: 2019 BOINC Pentathlon is over - a big thank you from the SixTrack team!

Dear volunteers,

the 2019 pentathlon is over, and we would like to thank all the participants for having crunched our tasks! We saw our BOINC CPU capacity almost double, boosting our calculations, even though it was only for a few days. We are very grateful for that!

The SixTrack team would also like to thank all you volunteers who regularly support us with your CPUs. You give us the possibility to deepen our understanding of the dynamic aperture, a quantity of paramount importance for the stability of particle beams in big research accelerators like superconducting colliders. Last but not least, see a very recent paper in the most important journal in the field of accelerator physics, comparing simulations and measurements:
https://journals.aps.org/prab/pdf/10.1103/PhysRevAccelBeams.22.034002
where the simulation results were obtained thanks to you and BOINC!

A lot has already been done with your help, but a lot more is still to come in the near future. We count on your support!
Keep up the good work,
Alessio and Massimo, for the SixTrack team

2019-05-15: BOINC Pentathlon - Sixtrack sprint

We are very grateful to have been chosen for the BOINC Pentathlon of SETI.Germany over the next days. For this, the SixTrack team has submitted a huge backlog of jobs, and our servers will primarily distribute SixTrack tasks over the next days. There will only be a drip-feed of other applications until our backlog is reduced. For fans of other applications: stay tuned, or run SixTrack for a few days.

2019-05-14: CMS -- Please set "no new tasks"

Hi to all CMS-ers. We need to drain the job queue so that a new version of the WMAgent can be installed.
Can you please set No New Tasks so that your current tasks can run out and no new jobs start? If you have any tasks waiting to run, please suspend or abort them.
Thanks, I'll let you know as soon as the change is done.

2019-05-13: Database problems

We are having database problems and have to schedule an intervention at 3:30pm UTC. The LHC@home servers are back again. We may have some irregular dispatching of some applications over the next hours.

2019-05-13: Native Theory Application (TheoryN) Released

The Native Theory application for Linux has moved out of beta status and is now generally available. It is similar to the native ATLAS application in that it requires CVMFS to be installed locally, but it does not require Singularity, as it uses Linux Containers (runc). To set up your machine for this application, please follow the instructions. Even if native ATLAS tasks are running successfully, follow the instructions to ensure that CVMFS is configured correctly for both and that Linux Containers are enabled. This is a new application (TheoryN) rather than an alternative version of the Theory application, as they have different resource requirements. If there are any issues, please post them to the Theory message board.
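
As a rough pre-flight check (a sketch, assuming CVMFS is already installed as per the instructions), the following should succeed before requesting TheoryN tasks:

    # Verify that CVMFS is mounted and responding; the linked
    # instructions remain the authoritative setup guide.
    cvmfs_config probe    # all configured repositories should reply OK
    cvmfs_config stat     # per-repository mount and cache details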

2019-05-03: new SixTrack version 5.02.05 released on BOINC for production

Dear volunteers,

after a long period of development and testing, we are pleased to announce a new major release of SixTrack on BOINC. The development team did an impressive job refactoring the code: porting arrays to dynamic memory allocation, splitting the source code (previously gathered in a few huge source files) into Fortran90 modules, making maintenance easier, and deleting a lot of duplicated code and massive arrays - not to mention countless bug fixes, documentation updates, re-written input parsing, and an improved build system and test suite.

We have also implemented plenty of new features. Most of them are still available only on the batch system at CERN (e.g. linking to Geant4 or Pythia, running coupled to FLUKA or other external codes, support for ROOT and HDF5), but many of them can already be deployed in BOINC jobs, like on-line aperture checking, electron lenses, generalised RF-multipoles, quadrupole fringe fields, and hashing of files for checks. All these new features will allow us to study new machine configurations and refine results, and we count on your help!

Thanks again for your support, and keep up the good work!

Alessio, for the SixTrack Team

2019-04-18: Problem writing CMS job results; please avoid CMS tasks until we find the reason

Since some time last night, CMS jobs appear to have problems writing results to CERN storage (DataBridge). It's not affecting BOINC tasks as far as I can see; they keep running and credit is given. However, Dashboard does see the jobs as failing, hence the large red areas on the job plots.
Until we find out where the problem lies, it's best to set No New Tasks or otherwise avoid CMS jobs. I'll let you know when things are back to normal again.

2019-03-23: CMS jobs

The batch I submitted last night is now showing on the monitor, so you can resume tasks at will.

2019-03-23: Warning: possible shortage of CMS jobs - set No New Tasks as a precaution

There was an intervention (i.e. an upgrade) yesterday afternoon[1] on the cmsweb-testbed system we use to submit CMS workflows, which left things a bit confused. One problem was fixed, and the monitor shows all good. However, we are running out of CMS jobs -- maybe 10 hours left -- but the new batch I submitted yesterday isn't showing up on the testbed monitor. I submitted another last night, but neither is showing this morning, so I submitted yet another batch.
At the moment I don't know whether the submission has failed or whether the monitor hasn't picked up the new batches. As a precaution, set No New Tasks on your CMS project(s) to avoid tasks crashing due to lack of jobs. I'll let you know as soon as I'm sure jobs are available again.

[1] How many times do I have to tell people not to touch critical systems on a Friday -- especially Friday afternoon!?

2019-02-20: CERN Open Days 2019

CERN Open Days 2019

2019-02-14: BOINC Open Source Project Looking for Experienced Macintosh Developers

The Berkeley Open Infrastructure for Network Computing (BOINC) system is the software infrastructure used by LHC@home and many other volunteer distributed computing projects. The BOINC Open Source Project is looking for volunteers to develop and maintain the BOINC client on Macintosh. The BOINC Client and Manager are C++ cross-platform code supporting MS Windows, Mac, Linux, and several other operating systems. We currently have a number of volunteer developers supporting Windows and Linux, but our main Mac developer is winding down his involvement after many years. He is prepared to help a few new Mac developers get up to speed.

If you have Mac development experience and are interested in volunteering time to help support and maintain the BOINC Mac client, please have a look at the more detailed description here: https://boinc.berkeley.edu/trac/wiki/MacDeveloper

If you are not a Mac developer, but have other skills and are interested in contributing to BOINC, the link above also has more general information.

2019-01-09: Consent required to export statistics

Following the implementation of GDPR compliance in BOINC, user consent is now required to export BOINC statistics from LHC@home to BOINC statistics sites, such as BOINCstats.

To grant your consent, please log in to the LHC@home site and navigate to the Project Preferences page.

Click on "Edit preferences" and then tick the box on the line:
"Do you consent to exporting your data to BOINC statistics aggregation Web sites?"

This will enable continued export of statistics from LHC@home for your BOINC user account. If you leave the box unchecked, statistics should no longer be exported.

Thanks for your contributions to LHC@home!

2018-12-21: Season's greetings from LHC@home

Dear volunteers,

We in the LHC@home team wish you all a Merry Christmas and Happy New Year!

Our warm thanks to all of you for your contributions to LHC@home!

2018-12-10: Server upgrade

The LHC@home BOINC servers will be upgraded to the latest BOINC server release Tuesday morning at 8AM GMT. BOINC services like upload/download and task validation and assimilation will be paused for about 1 hour during the intervention to update our servers.

2018-11-19: Pausing submission of LHCb Applications

Dear BOINC Volunteers,

LHCb has been very grateful to the BOINC community over the past years for their support and for provisioning computing resources to run LHCb simulation jobs. Since the start of the service for LHCb, you have provided computing resources that allowed us to execute a fantastic total of 3.1 million successful jobs, which simulated 142'740'087 events. This work contributed considerably to the work of the experiment. Many thanks to you all!!!

Despite this success, we have also observed that the work connected to BOINC operations within the LHCb computing project has grown over time. After internal discussions, we have decided to pause the operation of the service and therefore not to run LHCb applications via BOINC for the time being, with the possibility of re-opening the service in the future.

Please note that the possibility to contribute computing resources to other BOINC projects remains untouched by this decision, and we would like to encourage you to continue supporting the other projects represented via the LHC@home BOINC service.

For now I would like to re-state my thanks to you, the BOINC community, for your support.

Best regards

Dr. Stefan Roiser
LHCb Computing Project Leader
stefan.roiser@cern.ch

2018-10-19: SixTrack news

Dear Volunteers,

In spite of the break and the lack of simulation work, things are moving behind the scenes! Most of the trackers have been busy with the preparation of, and attendance at, the HiLumi annual collaboration meeting. For instance:
* new scanning parameters for DA studies, to shed some light on open points concerning the different behavior of the two beams in the LHC:
https://indico.cern.ch/event/743633/contributions/3071974/attachments/1695257/2728719/VanderVeken_phase_dp.pdf
* an update of DA results to the latest developments on HL-LHC optics:
https://indico.cern.ch/event/742082/contributions/3085158/attachments/1736226/2808309/nkarast_HLCollab_18102018.pdf

The collaboration meeting is the most important event of the large collaboration, led by CERN, that is designing and building the High-Luminosity upgrade of the LHC. It is not only a forum to present and discuss recent results, but also an event that inspires new ideas and studies. Therefore, we would like to announce that in a few weeks we will be back to you, counting on your usual fantastic and essential support, to launch new simulation campaigns!

Stay tuned!

Alessio and Massimo, for the SixTrack team

2018-10-04: Unexpected server downtime

Due to a failure of part of our computing infrastructure that also prevented our fail-over mechanism from working, the LHC@home web server was unavailable until this morning. Sorry for this, and thanks for your contributions to our project.

2018-06-27: test of SixTrack 5.00.00

Dear all,
we are in the process of testing a new SixTrack version, 5.00.00. This is a true upgrade of the code, which has been deeply re-factored, including dynamic memory allocation. Moreover, it provides fixes to the physics already implemented, e.g. solenoidal fields and online aperture checking, and brand new implementations, e.g. electron lenses and ion tracking. We are finalising the implementations, hence the version running as sixtracktest is a quick test of the main functionality and code re-factoring.
More to come in the coming days and weeks.
Thanks a lot for your precious help!
Keep up the good work, and happy crunching!
Alessio, for the SixTrack team

2018-04-23: CMS production pause

We have run into a problem with the CMS project -- the merged result files processed at CERN are failing to be written to central storage. Consequently I have decided not to submit any more jobs until the experts have clarified what the problem is. The CMS jobs queue is about to start draining and I expect it to be empty of volunteer jobs within a few hours (there may still be post-production jobs, but these run at CERN, not on your machines). I suggest you set No New Tasks or transfer to another project until the situation is resolved.

2018-04-05: CERN network problem

There was a major network problem at CERN this morning. It has apparently been resolved but not yet understood, according to the above link.

2018-03-26: Server upgrade - file uploads paused

We will change the storage back-end on our BOINC servers today, and the file servers will be disabled during the operation.

Hence your BOINC clients will not be able to upload or download files from LHC@home for a few hours today. Once our maintenance operation is finished, BOINC clients will be able to upload again.

Thanks for your understanding and happy crunching!

2018-03-14: Theory application reaches 4 TRILLION events today !!

LHC@home's Theory application today passed the milestone of 4 TRILLION simulated events. This project, under its earlier name "Test4Theory", began production in 2011 and was the first BOINC project to use Virtual Machine technology (based on CERN's CernVM system).

We will be publishing some more details for you on the LHC@home and CERN websites over the coming days. Here is a first release:

http://lhcathome.web.cern.ch/articles/test4theory/test4theory-tops-4-trillion-events

Many thanks to all our volunteers for enabling this achievement!

2018-02-22: CMS Job queue draining

Due to a problem with the WMAgent submission task, a new batch of CMS jobs is not being put in the Condor queue. So, the queue is now draining and there will be no more jobs available in a couple of hours. Best to set your BOINC instance to No New Tasks if you can, to avoid spurious compute error terminations.

2018-02-05: Task creation delayed - database maintenance

Due to a database issue last week, task generation is delayed and we need to clean up stuck workunits. The project daemons will be on and off this morning while we try to debug a problem with the BOINC transitioner.

2018-01-23: Thanks for supporting SixTrack at LHC@Home and updates

Dear volunteers,

All members of the SixTrack team would like to thank each of you for supporting our project at LHC@Home. The last weeks have seen a significant increase in workload, and your constant help did not pause even during the Christmas holidays, which is something that we really appreciate!

As you know, we are interested in simulating the dynamics of the beam in ultra-relativistic storage rings, like the LHC. As in other fields of physics, the dynamics is complex, and it can be decomposed into a linear and a non-linear part. The former puts the expected performance of the machine within reach, whereas the latter might dramatically affect the stability of the circulating beam. While the former can be analysed with the computing power of a laptop, the latter requires BOINC, and hence you! In fact, we perform very large scans of parameter spaces to see how non-linearities affect the motion of beam particles in different regions of the beam phase space and for different values of key machine parameters. Our main observable is the dynamic aperture (DA), i.e. the boundary between stable (bounded) and unstable (unbounded) motion of particles.

The studies mainly target the LHC and its upgrade in luminosity, the so-called HL-LHC. Thanks to this new accelerator, by ~2035 the LHC will be able to deliver to the experiments ten times more data than is foreseen in the first 10-15 years of LHC operation, in a comparable time. We are in full swing designing the upgraded machine, and the present operation of the LHC is a unique occasion to benchmark our models and simulation results. Deep knowledge of the DA of the LHC is essential to properly tune the working point of the HL-LHC.

If you have crunched simulations named "workspace1_hl13_collision_scan_*" (Frederik), then you have helped us map the effects of unavoidable magnetic errors expected from the new hardware of the HL-LHC on the dynamic aperture, and identify the best working point of the machine and correction strategies. Tasks named like "w2_hllhc10_sqz700_Qinj_chr20_w2*" (Yuri) focus attention on the magnets responsible for squeezing the beams before colliding them; due to their prominent role, these magnets, very few in number, have such a big impact on the non-linear dynamics that the knobs controlling the linear part of the machine can offer relevant remedial strategies.

Many recent tasks are aimed at relating the beam lifetime to the dynamic aperture. The beam lifetime is a measured quantity that tells us how long the beams are going to stay in the machine, based on the current rate of losses. A theoretical model relating beam lifetime and dynamic aperture was developed; a large simulation campaign has started, to benchmark the model against the plentiful measurements taken with the LHC in the past three years. One set of studies, named "w16_ats2017_b2_qp_0_ats2017_b2_QP_0_IOCT_0" (Pascal), considers the unavoidable multipolar errors of the magnets as the main source of non-linearities, whereas tasks named "LHC_2015*" (Javier) take into account the parasitic encounters near the collision points, i.e. the so-called "long-range beam-beam effects".

One of our users (Ewen) is carrying out two studies thanks to your help. In 2017, DA was directly measured for the first time in the LHC at top energy, and nonlinear magnets on either side of the ATLAS and CMS experiments were used to vary the DA. He wants to see how well the simulated DA compares to these measurements. The second study looks systematically at how the time dependence of DA in simulation depends on the strength of linear transverse coupling and the way it is generated in the machine. In fact, some previous simulations and measurements at injection energy have indicated that linear coupling between the horizontal and vertical planes can have a large impact on how the dynamic aperture evolves over time.

In all this, your help is fundamental, since you let us carry out the simulations and studies we are interested in, running the tasks we submit to BOINC. Hence, the warmest "thank you" to you all!
Happy crunching to everyone, and stay tuned!

Alessio and Massimo, for the LHC SixTrack team.

2018-01-23: LHC@home down-time due to system updates

Tomorrow, Wednesday 24/1, the LHC@home servers will be unavailable for a short period while our storage backend is taken down for a system update.

Today, Tuesday 23/1, some of the Condor servers that handle CMS, LHCb and Theory tasks will be down for a while. Regarding the ongoing issues with upload of files, please refer to this thread.

Thanks for your understanding and happy crunching!

2018-01-16: Short interruptions Tuesday

There will be a couple of short server outages while our BOINC service passes to fail-over nodes today, Tuesday 16th of January. Similar interruptions will happen next week, as we carry out security updates on our computing infrastructure.

2018-01-08: File upload issues

Our NFS storage backend got saturated and hence uploads are failing intermittently.

The underlying cause is an issue with file deletion; we are trying to resolve that.

Sorry for the trouble and thanks for your patience with transfers to LHC@home.

2017-12-14: Increased file server capacity

Since Tuesday evening, we have had intermittent upload failures due to a large number of new BOINC hosts that coincidentally joined at the same time as larger ATLAS tasks were introduced. Our file server capacity has been increased, and backlogged tasks waiting for upload should upload again soon. (Please refer to the ATLAS application and Number crunching forums for more details.)

2017-12-13: Missing accounts

Due to a too aggressive spam-cleaning campaign (our fault), we have accidentally deleted some valid accounts on Monday. We have restored a backup copy of the BOINC database, and will recover the missing account data.

If you get a message "missing account key" from your BOINC client, you may be affected. We expect that we can fix this later today, once we have verified the data sets. Hence there is no need to register again.

My apologies for this mishap.

2017-12-05: BOINC server update Thursday

We will upgrade the LHC@home web servers to the new BOINC server code with the "Bootstrap" theme on Thursday 7 December. The new style and layout can already be seen on the LHC@home development project.

During the intervention, from 09 UTC on Thursday, there may be intermittent availability of the LHC@home servers, so BOINC clients may back off and try to upload data later.

2017-09-29: Phaseout of legacy site: lhcathomeclassic.cern.ch/sixtrack

LHC@home has been consolidated and uses SSL for communication as mentioned in this thread last year.

Some BOINC clients are still connecting to the old lhcathomeclassic.cern.ch/sixtrack address, which will be phased out soon.

If this is the case for you, please re-attach the project to the current LHC@home URL presented in the BOINC manager. (http://lhcathome.cern.ch will redirect your BOINC client to https://lhcathome.cern.ch/lhcathome )
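
For those managing clients from the command line, the re-attach can also be done with boinccmd; this is a sketch, with your account key (shown on the LHC@home web site) replacing the placeholder:

    # Detach from the legacy URL, then attach to the current one.
    boinccmd --project http://lhcathomeclassic.cern.ch/sixtrack/ detach
    boinccmd --project_attach https://lhcathome.cern.ch/lhcathome <your_account_key>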

For those who are still running an old BOINC 6 client, please upgrade to BOINC 7.2 or later. (The current BOINC client releases are 7.6.33 or 7.8.)

Many thanks for your contributions to LHC@home!

2017-09-26: CMS jobs unavailable Weds 27th September

An upgrade to the CMS@Home workflow management system (WMAgent) is planned for tomorrow (Wed Sep 27th). This needs the current batch of jobs to be stopped so that the queue is empty. I plan to do this about 0700-0800 UTC on Wednesday.
To avoid "error while computing" task failures and the resulting back-off of your daily quotas, we suggest you set all your CMS machines to No New Tasks at least 12 hours beforehand to allow current tasks to time out in the normal way. You can stop BOINC once all your tasks are finished, if you wish.
Exactly how long the intervention will take is unclear, and there will be a delay of up to an hour to get a new batch of jobs queued afterwards. I will post here when jobs are available again, hopefully before the end of the day European time.

2017-09-18: Possible systems failures

We seem to be in the early stages of a system failure for several sub-projects. The proxy server has flatlined and my running jobs monitor is dipping alarmingly. Please check if you are getting tasks flagged as computing failures, and set No New Tasks if so.
[Edit] On closer inspection, it may just be the CMS app. [/Edit]
Obviously I'll apologise if this is a false alarm, but it's the wrong time of day to expect a prompt response from the CERN admins.

2017-09-05: New SixTrack exes

Dear all,

After testing them as the sixtracktest app, we have just pushed out executables to the sixtrack app. For the moment, we have exes targeted only at the main OSs, i.e. Windows, Linux, and the brand new one for MacOS. We are still finalising the definition of the plan classes with the sixtracktest app for targeting Android, FreeBSD, and ARM CPUs - e.g. see
https://lhcathome.cern.ch/lhcathome/forum_thread.php?id=4296&postid=32169#32169

Thanks a lot for your contribution and ... happy crunching!
Alessio, for the sixtrack team

2017-08-16: Deadline change for ATLAS jobs

Due to the tight deadline of the ATLAS tasks, we have changed the deadline of ATLAS jobs from 2 weeks to 1 week. An ATLAS job takes about 3-4 CPU hours to finish on a moderate CPU (2.5 GFLOPS).

2017-08-10: New ATLAS app version released for Linux hosts

We released a new version of the ATLAS app today, 2.41 for the x86_64-pc-linux-gnu platform.
The new features of this version include:
1. It requires the host OS to be either Scientific Linux 6 or CentOS 7.
2. It requires CVMFS and Singularity instead of VirtualBox to run the ATLAS jobs.
3. It is more efficient, as it avoids the overhead of VirtualBox.
Currently, this version is marked as beta.

For people who want to try it out, we provide a script to install everything including CVMFS and Singularity here.

Try it if you are interested!
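
Once everything is installed, a quick sanity check along these lines (a sketch; the repository names are the usual ATLAS CVMFS repositories) can confirm the host is ready:

    # Verify CVMFS can reach the ATLAS repositories and that
    # Singularity is available.
    cvmfs_config probe atlas.cern.ch atlas-condb.cern.ch
    singularity --version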

2017-08-05: CMS Weekend problem

Warning: the WMAgent which controls CMS jobs appears to have had a component fail very recently, and the queue seems to be exhausted. Please set No New Tasks or change to a backup app while I try to raise someone at CERN to fix it. This could be a problem, given that this is expected to be the heaviest weekend of the year for holiday travel in Europe...

2017-07-27: Optimising distribution of SixTrack tasks

Dear Volunteers,

we are trying to improve the distribution of SixTrack tasks. If your host could process more tasks but doesn't receive any during a project update, please let us know and send us your client logging report. Please continue the thread "SixTrack Tasks NOT being distributed" opened by Eric here:
http://lhcathome.cern.ch/sixtrack/forum_thread.php?id=4324
so that we can collect all the issues in one place. In this way, we can better tune the parameters controlling the distribution of tasks on the server side.

At the same time, we apologize for the loss of credits following the accidental deletion of lines in the main DB - please see message:
http://lhcathome.cern.ch/sixtrack/forum_thread.php?id=4362&postid=31563#31563
As you can see, task distribution has been progressing regularly since the beginning of the week.

Thanks in advance for your precious cooperation,
Alessio and Riccardo, for the SixTrack team

2017-07-24: Aborted Work Units

After deleting many really old results from 2013 until March 2017 (this was meant to be December 2016), it seems many tasks have been aborted. A full analysis and report will be posted. No action is required by volunteers. Eric.

2017-07-18: CMS Jobs working again

It's been a few hours now since the Data Bridge appears to have been fixed, and jobs are staging out normally. You can resume running CMS tasks at will.

2017-07-16: CMS@Home -- please set No New Tasks and perhaps temporarily run another project

There is a problem staging-out CMS@Home jobs to the Data Bridge. Until we find the cause, please set your CMS crunchers to No New Tasks, or temporarily move them to another app or project.
Sorry for the trouble, unfortunately it's beyond my capability to resolve.

2017-07-14: No RESULTS accepted from Linux Kernel 4.8.*

As an emergency measure and over the weekend, I have set max_results_day to -1 for all hosts running Linux (Ubuntu?) kernel 4.8.*. SixTrack is consistently crashing with an IFORT run-time formatted I/O error. This will avoid wasting your valuable contributions. Eric.

2017-06-26: IMPORTANT, pull back on SixTrack Inconclusive Results

Please see Message 31102 on SixTrack Application, Inconclusive Results, keyword IMPORTANT. Eric.

2017-06-26: CMS application job queue is being run down.

We want to update the WMAgent job controller, so I've stopped the next batch (I hope). We should run out of jobs in 10-12 hours, so set any machine running CMS tasks to No New Tasks as soon as practicable. Should be up again tomorrow.

2017-06-26: SixTrack Inconclusive Results

Please see the SixTrack Application threads for an important update, Message 31064, keyword BANNED.

2017-06-24: sixtrack_validator

There will be a (very) short interruption while I install a new sixtrack_validator. Should fix null/empty fort.10 and the nasty "outlier" problem. See SixTrack Application, sixtrack_validator for more news and details.

2017-06-20: SixTrack Tasks NOT being distributed

Please see Message boards: SixTrack application, thread "SixTrack Tasks NOT being distributed". This is to have one place for all relevant messages. This thread is for SixTrack only. My first post reports my personal status.

2017-06-19: Network and server problems Sunday night

We had a network problem in the computer centre at CERN last night, leading to a number of issues for our servers. BOINC servers should be back in business now.

Normally tasks should be correctly uploaded again on the next attempt. If you see any issues, please try an update or reset of the project.

Sorry for the trouble, and happy crunching!

2017-05-26: SixTrack News - May 2017

The SixTrack team would like to thank all the teams who took part in the 2017 pentathlon hosted by SETI.Germany:
https://www.seti-germany.de/boinc_pentathlon/
where LHC@Home was chosen for the swimming discipline. The pentathlon gave us the possibility of carrying out a vast simulation campaign, with lots of new results generated that we are now analysing. While the LHC experiments send volunteers tasks that analyse data collected by the LHC detectors or run Monte Carlo codes for data generation, SixTrack work units probe the dynamics of LHC beams; hence, your computers are running a live model of the LHC in order to explore its potential without actually using real LHC machine time, which is precious to physics.

Your contribution to our analyses is essential. For instance, we reached ~2.5 MWUs processed in total, with a peak slightly above 400 kWUs in process at the same time, and >50 TFLOPS, during the entire two weeks of the pentathlon. The pentathlon was also the occasion to verify recent improvements to our software infrastructure. After this valuable experience, we are now concentrating our energies on updating the executables with brand new functionality, extending the range of studies and of supported systems. This implies an even greater dependence on your valuable support.

Thanks a lot to all people involved! We count on your help and commitment to science and to LHC@home to pursue the new challenges of beam dynamics which lie ahead.

2017-04-27: LHCb application is in production

We are very happy to announce that the LHCb application is out of beta and is now in production mode on LHC@home. Thank you all for your precious contribution.

We are grateful to have you all as part of our project.

Please, refer to the LHCb application forum for any problem or feedback.

Thanks a lot
Cinzia

2017-04-26: New file server

We have added a new file server for download/upload to scale better with higher load. If there should be errors with download or upload of tasks, please report on the MBs.

Thanks for contributing to LHC@home!

2017-03-22: ATLAS application now in production

The ATLAS application is now in production here on LHC@home, after a period of testing. This marks another milestone for the LHC@home consolidation, and we would like to warmly thank all of you who have contributed help and testing for the migration!

Please refer to Yeti's checklist for the ATLAS application and the ATLAS application forum if you need help.

2017-03-14: Network interruptions 15th of March

Due to a network upgrade in the CERN computer centre, connections to LHC@home servers will intermittently time out tomorrow Wednesday morning between 4 and 7am UTC.

BOINC clients will retry later as usual, so this should be mostly transparent.

2017-03-02: VLHCathome project fully migrated

The former vLHCathome project has now been migrated here and the old vLHCathome project site has been redirected.

The credit has also been migrated as discussed in this thread.

If your BOINC client complains about a wrong project URL, please re-attach to this project, LHC@home.

Thanks again to all who contributed to vLHCathome and to those who contribute here!

-- The team

2017-02-17: Draining the CMS job queue

Because of an upgrade to the WMAgent server, we need to drain the CMS job queue. So, I'm not submitting any more batches at present and we should start running out over the weekend. If you see that you are not getting any CMS jobs (not tasks...), please set No New Tasks or stop BOINC.
I expect that the intervention will take place Monday morning, and hopefully we'll have new jobs again later that day.

2017-01-27: Good news for the CMS@Home application

This afternoon we demonstrated the final link in the chain of producing Monte Carlo data for CMS using this project (and the -dev project too, of course), namely the transfer of result files from the temporary Data Bridge storage to a CMS Tier 2 site's storage element (SE). To summarise, the steps are:

o Creating a configuration script defining the process(es) to be simulated
o Submitting a batch of jobs of duration and result-file size suitable for running by volunteers
o Having those jobs picked up by volunteers running BOINC and the CMS@Home application, and the result files returned to the Data Bridge
o Running "merge" jobs on a small cluster at CERN to collect the smaller files into larger files (~2.2 GB) -- this step has to be done at CERN as most volunteers will not have the bandwidth (or data plan!) to handle the data volumes required. This step also serves to a large extent as the verification step required to satisfy CMS of the result files' integrity.
o Transferring the merged files into the Grid environment where they are then readily available to CMS researchers around the world

Thanks, everybody. From here on it gets more political, but we've been garnering support as the project progressed. We now need to move into a more "production" environment and convince central powers-that-be to take over the responsibility of submitting suitable workflows and collecting the results. You will still see some changes in the future, especially as we bring some of the more-advanced features across here from the -dev project.

2017-01-19: MacOS executable OSX 10.10.5 Yosemite

Well, I have finally got some work on my Mac with our new MacOS executable built on OS X 10.10.5 Yosemite. Please report to me, eric.mcintosh@cern.ch, or to the topic SixTrack Application, MacOS executable thread, if you get some work and there are problems. Eric.

2017-01-08: VM applications broken by the Windows 10 update KB3206632

The Windows 10 update KB3206632 introduces an issue that affects virtualization-based security (VBS) and hence may break VM applications. The issue is fixed in the update KB3213522. If you are running Windows 10, please ensure that you have applied the KB3213522 update.

Thanks to everyone who contributed to the threads on this issue.

Refs:
Missing heartbeat file errors
Microsoft KB3206632 from 16/12/15

2016-12-25: Season's Greetings

A very Merry Christmas and a Happy New Year to all the LHC@home supporters.
(I shall send some news about our plans for 2017 in the next few days.)
Eric.

2016-11-21: VM applications

Following the Theory simulations added 1 week ago, we have now also deployed the CMS and LHCb applications from the Virtual LHC@home project here on the consolidated, original LHC@home.

Please note that in order to run VM applications in addition to the classic BOINC application Sixtrack, you need a 64-bit machine with VirtualBox installed and virtualisation extensions (VT-x) enabled. The details are explained on the Join us and FAQ pages on the LHC@home web site.
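
(As a rough illustration only: on Linux you can check the two hardware prerequisites along these lines; on Windows the place to look is the BIOS/UEFI settings and VirtualBox itself. This is not an official project tool.)

    import platform

    def can_run_vm_apps():
        """True if this looks like a 64-bit CPU with VT-x/AMD-V (Linux only)."""
        if "64" not in platform.machine():       # e.g. "x86_64"
            return False
        try:
            with open("/proc/cpuinfo") as f:
                flags = f.read()
        except OSError:
            return False                         # not Linux; check manually
        return "vmx" in flags or "svm" in flags  # Intel VT-x or AMD-V

    print("VM applications possible:", can_run_vm_apps())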

By default, only the Sixtrack application is enabled in your BOINC project preferences. If you have VirtualBox installed and wish to try VM applications as well, you need to enable other applications in your LHC@home project preferences.

Please note that if you run an older PC with Windows XP or similar, it is recommended to stay with the default: Sixtrack only.

Thanks for your contributions to LHC@home!

--The team

2016-10-06: LHC@home consolidation

As part of the consolidation of LHC@home, we have set up a new web front end using SSL for this project. The new URL is:

https://lhcathome.cern.ch/lhcathome

Please feel free to connect to the new site at your convenience. (BOINC clients 7.2 and later support SSL.)

The old LHC@home classic site will continue operation as long as required. Currently there are no new Sixtrack tasks in the queue, but soon more applications and work will be available from this project.

2016-07-26: LHC@Home - SixTrack Project News

The members of the SixTrack project from LHC@Home would like to thank all the volunteers who made their CPUs available to us! Your contribution is precious, as in our studies we need to scan a rather large parameter space in order to find the best working points for our machines, and this would be hard to do without the computing power you all offer to us!

Since 2012 we have been performing dedicated beam measurements to probe what we call the "dynamic aperture" (DA). This is the region in phase space where particles can move without experiencing a large increase of the amplitude of their motion. For large machines like the LHC this is an essential parameter for ensuring beam stability and allowing long data taking at the giant LHC detectors. The measurements will be benchmarked against numerical simulations, and this is the point where you play an important role! Currently we are finalising a first simulation campaign and we are in the process of writing up the results in a final document. As a next step we are going to analyse the second half of the measured data, for which a new tracking campaign will be needed... so, stay tuned!

Magnets are the main components of an accelerator, and non-linearities in their fields have a direct impact on the beam dynamics. The studies we are carrying out with your help are focussed not only on the current operation of the LHC but also on its upgrade, i.e. the High Luminosity LHC (HL-LHC). The design of the new components of the machine is at its final steps, and it is essential to make sure that the quality of the magnetic fields of the newly built components allows the highly demanding goals of the project to be reached. Two aspects are most relevant (a schematic sketch of the first follows below):
o Specifications for the field quality of the new magnets. The criterion to assess whether the magnets' field quality is acceptable is based on the computation of the DA, which should be larger than a pre-defined lower bound. The various magnet classes are included in the simulations one by one, the impact on DA is evaluated, and the expected field quality is varied until the DA acceptance criterion is met.
o Dynamic aperture under various optics conditions, analysis of the non-linear correction system, and optics optimisation are essential steps to determine the field quality goals for the magnet designers, as well as to evaluate and optimise the beam performance.
The studies involve accelerator physicists from both CERN and SLAC.
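
(The first aspect is, schematically, an iterative scan. The sketch below is purely illustrative: compute_da() stands in for a full SixTrack tracking campaign, and the magnet class names, numbers and bound are invented for the example.)

    DA_MIN = 12.0     # assumed acceptance criterion on the dynamic aperture

    def compute_da(n_classes_included, error_scale):
        """Placeholder: DA shrinks as more error sources enter the model,
        and as the assumed field-error amplitude grows."""
        return 13.0 - 0.8 * n_classes_included * error_scale

    specs = {}
    included = 0
    for magnet_class in ["inner triplet", "D1", "D2", "Q4"]:
        included += 1                 # add magnet classes one by one
        scale = 1.0                   # start from the nominal error table
        # Tighten the assumed field quality until the DA criterion is met.
        while compute_da(included, scale) < DA_MIN and scale > 0.1:
            scale = round(scale - 0.1, 2)
        specs[magnet_class] = scale
        print("%s: acceptable error scale %.1f" % (magnet_class, scale))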

Long story made short, the tracking simulations we perform require significant computer resources, and BOINC is very helpful in carrying out the studies. Thanks a lot for your help!
The SixTrack team

Latest papers:

R. de Maria, M. Giovannozzi, E. McIntosh (CERN), Y. Cai, Y. Nosochkov, M-H. Wang (SLAC), "Dynamic Aperture Studies for the LHC High Luminosity Lattice", presented at IPAC 2015.
Y. Nosochkov, Y. Cai, M-H. Wang (SLAC), S. Fartoukh, M. Giovannozzi, R. de Maria, E. McIntosh (CERN), "Specification of Field Quality in the Interaction Region Magnets of the High Luminosity LHC Based on Dynamic Aperture", presented at IPAC 2014.

Latest talks:

Y. Nosochkov, Dynamic Aperture and Field Quality, DOE review of LARP, FNAL, USA, July 2016
Y. Nosochkov , Field Quality and Dynamic Aperture Optimization, LARP HiLumi LHC collaboration meeting, SLAC, USA, May 2016
M. Giovannozzi, Field quality update and recent tracking results, HiLumi LHC LARP annual meeting, CERN, October 2015
Y. Nosochkov, Dynamic Aperture for the Operational Scenario Before Collision, LARP HiLumi LHC collaboration meeting, FNAL, USA, May 2015

2016-03-16: Disk Space Exceeded

I am sorry we have submitted some "bad" WUs.
They are using too much disk space.
Please delete any WUs with names like
wjt-18-L1-trc......
wjt-15-L1-trc.......
Apologies.

2016-02-27: Server daemons temporarily stopped

Due to a problem with an underlying disk server, the BOINC daemons are temporarily shut down until the disk volume is back.

2016-02-02: Short server interruption this morning

Our LHC@home servers will be down for a short while this morning due to a disk server intervention.

2016-02-02: Short server interruption 9-Feb.

Our LHC@home servers will be down for a short while from 8:00 UTC on 9 February due to a disk server intervention. (The intervention was postponed by one week.)

2015-12-07: BOINC Server up

The server is back, for the moment at least.
Clearing backlog of results. Eric.

2015-12-06: Server down.

The BOINC server has been stopped temporarily because of
file system problems at CERN. Hopefully to be restarted tomorrow
Monday. Eric.

2015-11-16: Work/result buffering problem at CERN

We have had a problem with the BOINC buffer on the CERN side over the weekend.
It is being investigated and will hopefully soon be corrected. Eric.

2015-11-09: Another short service interruption

The LHC@home servers will be down for a short while from 6:30 UTC Tuesday 10th November for a database update.

2015-09-08: Service interruption tomorrow morning

LHC@home servers will be down for about 1 hour tomorrow morning from 6am UTC, due to an intervention on the database server.

2015-08-24: Server interruption 12 UTC

The BOINC server will be down for maintenance for about 30 minutes from 12:00 UTC today.

BOINC clients will back off and return results later once the server is up as usual.

Many thanks for your contributions to LHC@home!

2015-06-17: Brief Interruption, Thursday 18th June, 2015

There will be a hopefully brief interruption to the service tomorrow,
Thursday, at 10:30 CEST to provide separate NFS servers for SixTrack
and ATLAS. The WWW pages should still be accessible and a further
message will be posted when the operation is complete. Eric and Nils.

2015-06-11: Project down due to a server issue

Due to a problem with an NFS server backend at CERN, the Sixtrack and ATLAS BOINC projects are down. A fix is underway.

2015-06-06: HostID 10137504 user aqvario

HostID 10137504, owner aqvario.
I set the max_results_day to -1; locking the stable door
after the horse has bolted. For some reason I cannot find the
messages I read this morning on this topic. Thanks for the
help and the Google translation. Eric.

2015-05-17: Quorum of 5, wzero and Pentathlon

I am currently running a set of very important tests to try and
find the cause of a few numerical differences between different platforms
and executables. I could/would not usually do this, but because of your efforts
during the Pentathlon I have a unique opportunity. It also keeps up the
workload and gives you all an opportunity to get credits.
These tests are wzero with a quorum of 5. Thanks. Eric.

2015-05-16: DISK LIMIT EXCEEDED

Please note that this may occur if you are also subscribed
to the LHC experiment projects ATLAS or CMS via vLHCathome.
A workaround is to delete the remaining files yourself.

2015-05-15: New news on the BOINC Pentathlon

Please look at the News "15th May, 2015" post for the latest update
on the BOINC Pentathlon. Eric.

2015-05-15: News 15th May, 2015

As many of you know, LHC@home has been selected to host
the Sprint event of the BOINC Pentathlon organised by
SETI.Germany. Information can be found at
http://www.seti-germany.de/boinc_pentathlon/22_en_Welcome.html
The event starts at midnight and will last for three days.

This is rather exciting for us and will be a real test of
our BOINC server setup at CERN. Although this is the weekend
following Ascension my colleagues are making a big effort to
submit lots of work, and I am seeing a new record number of active WUs
every time I look. The latest number was over 270,000 and the Sprint
has not yet officially started.

We have done our best to be ready without making any last minute changes
and while this should be fun I must confess to being rather worried
about our infrastructure. We shall see.

We still have our problems, for a year now.

I am having great difficulties building new executables since Windows XP
was deprecated and I am now trying to switch to gfortran on Cygwin.
It would seem to be appropriate to use the free compiler on our
volunteer project.

We are seeing too many null/empty result files. While an empty result can
be valid if the initial conditions for tracking are invalid, I am hoping
to treat these results as invalid. These errors are making it extremely
difficult for me to track down the few real validated-but-wrong results.
I have seen at least one case where a segmentation violation occurred, a clear
error, but an empty result was returned. The problem does not seem to
be OS, hardware or case dependent.
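
(The intended rule could be sketched as below; the helper name is hypothetical and the real validator lives in the BOINC server code, not here.)

    import os

    def result_is_valid(result_path, initial_conditions_ok):
        """Empty output is legitimate only if tracking could never start."""
        if os.path.getsize(result_path) == 0:
            return not initial_conditions_ok
        return True   # non-empty: hand over to the normal comparison validator

    # An empty file from a WU with good initial conditions would be invalid:
    open("demo.result", "w").close()
    print(result_is_valid("demo.result", initial_conditions_ok=True))   # False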

I am also working on cleaning the database of ancient WUs. We had not
properly deprecated old versions of executables until very recently.

I am currently using boinctest/sixtracktest to try a SixTrack which will return the full results, giving more functionality and also allowing a case to be automatically handled as a series of subcases.

Then we must finally get back to MacOS executables, AVX support, etc.

Still an enormous amount of production is being carried out successfully
thanks to your support.

I shall say no more until we see how it goes for the next three days. Eric.

2015-04-30: Short stoppage for a disk intervention

The Sixtrack server will be down for a while this afternoon for a disk intervention. Clients will be able to upload results again soon.

2015-04-23: Upgrade of the look and feel of the SixTrack website

The http://lhcathomeclassic.cern.ch/sixtrack/ website has been brought up to date with a new look and feel, consistent with the other LHC@Home projects. It maintains all the links and the functionality of the previous one.

2015-03-29: Status Result Differences 29th March, 2015

Please have a look at my latest post in
Number Crunching/Host messing up tons of results. Eric.

2015-02-09: Server Intervention 10-Feb-2015

There will be a short server interruption on Tuesday 10-Feb-2015 from 14:00-15:00 CET for a hardware upgrade.

Update: The upgrade finished at 15:00 and the service is back up.

2015-01-29: Uploads failing

Apologies; disk full problem. Cleaning up and hoping to
return to normal shortly. Thanks for all the messages. Eric.

2014-12-31: News, December, 2014.

Well not much news really. The project is ticking over
and we have processed a tremendous amount of work in 2014.

Right now we are trying to move the project to a new CERN IT
infrastructure so there may be a few hiccups in January
(CERN is closed for two weeks, but systems are up and running).

We are still using executables from May and I still don't have
a valid MacOS executable :-( , no heartbeat, so something is really
wrong. I haven't found an explanation for the "no permission/cannot access"
problems on Windows, but the overall error rate is about 1.5% which
seems to be "normal". We have also had problems with the w- WUs
which produced a lot of output, now under control. However, running
with a smaller number of pairs to reduce the volume of output seems
to give problems with validation. Working on this.

A New Year, so I shall try and make a big effort to get moving forward
as we have been pretty well stuck for 9 months; after ten years I am
a bit disappointed at the lack of progress. However, as usual, we must
maintain the service as top priority.

I have also noted increased interest from the experiments in using volunteer
computing and this may impact lhcathomeclassic......

Anyway, LHC is heading steadily to restart in the Spring, and we shall
continue studying the High Luminosity upgrade. Many thanks for your
patience and understanding and continued valued support.

A Very Happy New Year. Eric.

2014-12-24: Season's Greetings

I wish you a very Merry Christmas and
a Happ[y|ier] New Year. Thanks for all
your support (news to follow). Eric.

2014-10-31: Heavy I/O on Windows WUs

It seems WUs with names beginning w-.... are creating a bit too
much I/O for Windows. Under investigation, but the results
are good and are required. Thanks. Eric.

2014-10-14: 17:00 CET, 15th October, Service back to "normal".

I believe we have finally resolved various issues as
of about 16:00 today. Apologies for the downtime. Eric.

2014-10-10: CERN AFS problems

We seem to be having intermittent? problems with our local
file system. Server running but.....will fix soonest.

2014-10-05: Service back; 5th October

I think we are back in business. Lots of work coming, I hope,
once we sort out the disk space issue. Sorry for all the hassle
and thank you for your continued support.

2014-10-04: Re-enabled daemons

I have painfully cancelled all w-b3 WUs. According to the documentation they
stay in the database but are marked as "not needed".
I have also disabled further WUs of this type until we sort it out.
I hope to have saved some 65,000 valid WUs. We shall see tomorrow.
Please post to this thread if there are further problems (I have restarted as root...).
It will probably take some time to get back to normal.
Report will follow in due course.

2014-10-04: Service disabled

I have managed to stem the flood and disable the service.
Apologies and will inform as soon as we are started again.

2014-10-04: Disk Limit increased

I am unable to stop submission.
I have upped the limit on disk space to 500MB.
I can't do anything about active WUs but I hope the new limit
will suffice for new WUs. More news tomorrow.

2014-10-03: Disk Limit exceeded w-b3

Drastic action being taken to delete the download WUs.
This may crash the server....
Apologies for the wasted CPU.

2014-07-22: Power Supply Ripple

As requested, and for your information, Miriam has described her
recent studies as follows:
A principal component of the planned upgrade to a high luminosity LHC (HL-LHC) is the replacement of the high field quadrupole magnets - the so called "inner triplet".
The long term beam stability can be significantly reduced by magnetic field errors, misalignment of the magnets and by irregularities in the power supply (ripple). The recent batch of fifteen or so studies, involving over one and a half million cases or Work Units each of one million turns (for a stable beam), are aimed at determining the maximum allowable tolerances for the power supply ripple assuming the known field and alignment errors.

2014-07-22: More on DOWNLOAD

After running through the w- WUs I am now running
a few test jobs as I think the WUs may have been OK.
I cannot reproduce the problem (of course!) at CERN on my
Windows 7 system. Eric.

2014-07-19: Download Errors located.

ERR_DOWNLOAD problem located and there should be no more once this
batch of dud WUs has been cleared. May be Monday before
I can do anything else. Eric.

2014-07-18: DOWNLOAD ERRORS

Just noticed the error rate has doubled to about 6% in the
last 24 hours. The errors seem to be ERR_RESULT_DOWNLOAD, which I
have confirmed by checking the MBs right now. Any help/detailed
info welcome while I notify CERN support.
(Another Friday afternoon problem!) Eric.

2014-05-22: Three Problems, 22nd May.

Settling down a bit; I am seeing around 2% WU failures.

Problem 1: EXIT_TIME_LIMIT_EXCEEDED. Tried to minimise this
and will hopefully implement "outliers" to avoid it in future.

Problem 2: Can't Create Process and I will look for help on this.
Probably connected with our build but we shall see.

Problem 3: Found 545 invalid results involving 124 hosts.
One invalid result was duplicated! But I am not going
to run everything 3 times. I can live with this. The top 12 culprits gave
77 45 26 25 22 21 19 16 14 11 10 9 invalid results each.
(I thought we stopped using hosts with this many errors......)
Seems to be hardware, overclocking, cosmic rays?????

Getting a lot of production done successfully. Eric.

2014-05-19: Status, 19th May, 2014

Getting a lot of work done, but out of 400,000 WUs over the last seven days
we still have about 8000 errors (2% and decreasing, I think). The main problem
is EXIT_TIME_LIMIT_EXCEEDED but also "Can't create process". A side effect is
a mess up with credits. I have increased the fpops bound to help, I hope, and
today "reset credit statistics". Please be patient about credits and I shall see
what happens and if we can compensate somehow.
Unfortunately today I discovered a result difference, only one, but I need to
do more checking. I see no invalid results so the former Linux/Windows
discrepancy is largely resolved. My priority is the integrity of the results
and I may have to spend some days pinning down the result difference,
checking various ifort versions, and doing more checks and tests.
We have a macOS executable under test.
Thank you for your patience, understanding and support. Eric.
(P.S. Getting correct identical results on any PC from a Pentium 3
to the latest, with a multitude of versions of Linux, Windows and macOS
is not easy! I can publish only when the LHC@home service is > 99%.
Afterwards GPU, Android, and 10 million turns)

2014-05-16: CreateProcess problems

I am seeing about 1% CreateProcess problems mainly on Windows 7.
Most often Access Denied (in various languages :-).
Also some Access violation, page out of date or similar.
Found some BOINC mails about this. Under investigation.
Seems to be host dependent.
(More work coming soon.) Eric.

2014-05-14: LHC@home is back

The service was restarted today and WUs should start
coming in, building up gradually. Thanks to all. Eric.

2014-05-11: First production tests, 11th May, 2014

Trying 590 WUs tonight. If all OK will restart full
production tomorrow 12th May. Eric.

2014-05-10: Status, 10th May

Please see MBs, Number Crunching, Status 10th May, Version 451.07

2014-04-19: WU Submission SUSPENDED 19th April, 2014

In order to avoid any further errors and waste of your valuable
resources I have temporarily stopped WU submission. There are only
a few thousand WUs active and when they are cleared I hope we will have
new Windows executables. Sadly the Windows executables are now giving
wrong results in many cases. I looked at using Homogeneous Redundancy
but I would still get wrong results. I thought of removing the Windows
executables but they are over 80% of our capacity. In this way I hope in
a few days after users and support return from vacation we can safely
introduce new Windows executables after tests using the BOINC test
facility. Sorry about that but I would rather get it fixed properly as we
have lots of new work coming.

Thank you for your patience and support. Eric.

2014-03-16: Status, March 2014

First, in reply to a recent query about the 2014 workload, thanks to Massimo:
"The majority of the 2014 studies will be devoted to LHC upgrade and the rest to understand the nominal
machine. I do not expect any increase in workload when approaching the LHC re-start in 2015, on the
other hand, we will all be locked up in the control room and the resources for performing the
simulations will be reduced."

Second, we have been experiencing major problems with our
Windows executables for several months now.
There are "small" result differences between Windows and Linux.
After extensive testing I believe they are due to the Windows
ifort compiler. This will be verified and fixed as soon as I
return to CERN next week. In addition, new builds of SixTrack
for Windows, which now include a call to boinc_unzip, are failing
on Windows in at least two ways; there is a problem parsing the
hardware description (/proc/cpuinfo on Linux) and secondly we
get "cannot Create Process" errors. So, we shall first try and
build without the hopefully responsible call, and fix the result
differences. We can then resume development of the case splitting
into smaller WUs and the return of all results.

It is great that your support continues and, when required, we have
lots of capacity. Saw a new record of over 140,000 WUs in
process a couple of weeks ago. Eric.

2014-01-24: Status, 24th January, 2014

Hope this will answer some of your messages.

We still have some 34,000 WUs NOT being taken. We have apparently
almost 6000 in progress.

We introduced SixTrack Version 4.5.03 on Wednesday 22nd
January after extensive testing on boinctest and at CERN.
Unluckily Yuri flooded us with work at the same time
and AFS blew up leading to a huge backlog of over 16,000
results to be downloaded.

1. Results Validation; seems to be OK. I summarise that,
counting from 0-59, we do NOT CHECK Words 51, 59? and 60
in fort.10.
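
(Illustratively, the comparison rule amounts to something like the following; this is not the actual validator code, which is part of the BOINC server.)

    # Compare two fort.10 records word by word, counting words 0-59, but
    # skip the unchecked words (51 and 59; the post above also queries 60).
    SKIPPED = {51, 59}

    def records_match(rec_a, rec_b, tol=0.0):
        """True if all checked words agree to within tol."""
        return all(abs(a - b) <= tol
                   for i, (a, b) in enumerate(zip(rec_a, rec_b))
                   if i not in SKIPPED)

    a = [1.0] * 60
    b = list(a)
    b[59] = 2.0                      # differs only in an unchecked word
    print(records_match(a, b))       # True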

The validator log shows many many "cannot open" messages for supposedly
existing results needed for comparison. They were probably lost
somehow.

2. Assimilation; the log shows
"Herror too many total results" !!!
There are about 2000 (1979) unique messages and cases/WUs.
I suspect we may need to clean the database and remove results
(with clients losing credit I am afraid, but they will probably never
get credit for these anyway).
I could delete them from upload but that would probably be worse.

3. Scheduler log: there are about 2.4 million messages, of which
1.64M are unrecognised, with multiple messages per WU.
This is perhaps significant!
Previously these messages existed only for Macs as far as I can see.
Here is one case:
2014-01-22 17:24:41.1073 [PID=51877] HOST::parse(): unrecognized: opencl_cpu_prop
2014-01-22 17:24:41.1075 [PID=51877] HOST::parse(): unrecognized: platform_vendor
2014-01-22 17:24:41.1075 [PID=51877] HOST::parse(): unrecognized: Advanced Micro Devices, Inc.
2014-01-22 17:24:41.1075 [PID=51877] HOST::parse(): unrecognized: /platform_vendor
2014-01-22 17:24:41.1075 [PID=51877] HOST::parse(): unrecognized: opencl_cpu_info
2014-01-22 17:24:41.1075 [PID=51877] HOST::parse(): unrecognized: name
2014-01-22 17:24:41.1075 [PID=51877] HOST::parse(): unrecognized: Intel(R) Core(TM) i7-4770K CPU @ 3.50GHz
2014-01-22 17:24:41.1075 [PID=51877] HOST::parse(): unrecognized: /name
2014-01-22 17:24:41.1076 [PID=51877] HOST::parse(): unrecognized: vendor
2014-01-22 17:24:41.1076 [PID=51877] HOST::parse(): unrecognized: GenuineIntel
2014-01-22 17:24:41.1076 [PID=51877] HOST::parse(): unrecognized: /vendor
2014-01-22 17:24:41.1076 [PID=51877] HOST::parse(): unrecognized: vendor_id
2014-01-22 17:24:41.1076 [PID=51877] HOST::parse(): unrecognized: 4098
2014-01-22 17:24:41.1076 [PID=51877] HOST::parse(): unrecognized: /vendor_id
2014-01-22 17:24:41.1076 [PID=51877] HOST::parse(): unrecognized: available
2014-01-22 17:24:41.1076 [PID=51877] HOST::parse(): unrecognized: 1
2014-01-22 17:24:41.1076 [PID=51877] HOST::parse(): unrecognized: /available
2014-01-22 17:24:41.1076 [PID=51877] HOST::parse(): unrecognized: half_fp_config
2014-01-22 17:24:41.1076 [PID=51877] HOST::parse(): unrecognized: 0
2014-01-22 17:24:41.1076 [PID=51877] HOST::parse(): unrecognized: /half_fp_config
2014-01-22 17:24:41.1077 [PID=51877] HOST::parse(): unrecognized: single_fp_config
2014-01-22 17:24:41.1077 [PID=51877] HOST::parse(): unrecognized: 191
2014-01-22 17:24:41.1077 [PID=51877] HOST::parse(): unrecognized: /single_fp_config
2014-01-22 17:24:41.1077 [PID=51877] HOST::parse(): unrecognized: double_fp_config
2014-01-22 17:24:41.1077 [PID=51877] HOST::parse(): unrecognized: 63
2014-01-22 17:24:41.1077 [PID=51877] HOST::parse(): unrecognized: /double_fp_config
2014-01-22 17:24:41.1077 [PID=51877] HOST::parse(): unrecognized: endian_little
2014-01-22 17:24:41.1077 [PID=51877] HOST::parse(): unrecognized: 1
2014-01-22 17:24:41.1077 [PID=51877] HOST::parse(): unrecognized: /endian_little
2014-01-22 17:24:41.1077 [PID=51877] HOST::parse(): unrecognized: execution_capabilities
2014-01-22 17:24:41.1078 [PID=51877] HOST::parse(): unrecognized: 3
2014-01-22 17:24:41.1078 [PID=51877] HOST::parse(): unrecognized: /execution_capabilities
2014-01-22 17:24:41.1078 [PID=51877] HOST::parse(): unrecognized: extensions
2014-01-22 17:24:41.1078 [PID=51877] HOST::parse(): unrecognized: cl_khr_fp64 cl_amd_fp64 cl_khr_global_int32_base_atomics cl_khr_global_int32_extended_atomics cl_khr_local_int32_base_atomics cl_khr_local_int32_extended_atomics cl_khr_int64_base_atomics cl_khr_int64_extended_atomics cl_khr_3d_image_writes cl_kh
2014-01-22 17:24:41.1078 [PID=51877] HOST::parse(): unrecognized: /extensions
2014-01-22 17:24:41.1153 [PID=51877] HOST::parse(): unrecognized: global_mem_size
2014-01-22 17:24:41.1153 [PID=51877] HOST::parse(): unrecognized: 17029206016
2014-01-22 17:24:41.1153 [PID=51877] HOST::parse(): unrecognized: /global_mem_size
2014-01-22 17:24:41.1153 [PID=51877] HOST::parse(): unrecognized: local_mem_size
2014-01-22 17:24:41.1153 [PID=51877] HOST::parse(): unrecognized: 32768
2014-01-22 17:24:41.1153 [PID=51877] HOST::parse(): unrecognized: /local_mem_size
2014-01-22 17:24:41.1153 [PID=51877] HOST::parse(): unrecognized: max_clock_frequency
2014-01-22 17:24:41.1154 [PID=51877] HOST::parse(): unrecognized: 3500
2014-01-22 17:24:41.1154 [PID=51877] HOST::parse(): unrecognized: /max_clock_frequency
2014-01-22 17:24:41.1154 [PID=51877] HOST::parse(): unrecognized: max_compute_units
2014-01-22 17:24:41.1154 [PID=51877] HOST::parse(): unrecognized: 8
2014-01-22 17:24:41.1154 [PID=51877] HOST::parse(): unrecognized: /max_compute_units
2014-01-22 17:24:41.1154 [PID=51877] HOST::parse(): unrecognized: opencl_platform_version
2014-01-22 17:24:41.1155 [PID=51877] HOST::parse(): unrecognized: OpenCL 1.2 AMD-APP (1348.5)
2014-01-22 17:24:41.1155 [PID=51877] HOST::parse(): unrecognized: /opencl_platform_version
2014-01-22 17:24:41.1155 [PID=51877] HOST::parse(): unrecognized: opencl_device_version
2014-01-22 17:24:41.1155 [PID=51877] HOST::parse(): unrecognized: OpenCL 1.2 AMD-APP (1348.5)
2014-01-22 17:24:41.1155 [PID=51877] HOST::parse(): unrecognized: /opencl_device_version
2014-01-22 17:24:41.1155 [PID=51877] HOST::parse(): unrecognized: opencl_driver_version
2014-01-22 17:24:41.1155 [PID=51877] HOST::parse(): unrecognized: 1348.5 (sse2,avx)
2014-01-22 17:24:41.1155 [PID=51877] HOST::parse(): unrecognized: /opencl_driver_version
2014-01-22 17:24:41.1155 [PID=51877] HOST::parse(): unrecognized: /opencl_cpu_info
2014-01-22 17:24:41.1156 [PID=51877] HOST::parse(): unrecognized: /opencl_cpu_prop
2014-01-22 17:24:41.3583 [PID=51877] Request: [USER#221474] [HOST#10137513] [IP 69.35.195.242] client 7.2.33
2014-01-22 17:24:41.3880 [PID=51877] Sending reply to [HOST#10137513]: 0 results, delay req 6.00
2014-01-22 17:24:41.3880 [PID=51877] Scheduler ran 0.035 seconds

I am not an expert but it seems to me it might explain work not being taken.......
(but never saw this with boinctest!).

Other issue; one client reports "Cannot Create Process" on Windows 7.
May or may not be significant.

Are executables "signed" OK?

So all a bit complicated but hope to sort it (very) soon.
Eric.

2014-01-23: Hiccup, today 23rd January

Apologies for an interruption to service.
Working on it. More news when corrected.
Eric.

2013-11-19: Publications Update

The WWW page
http://lhcathome.web.cern.ch/sixtrack/sixtrack-and-numerical-simulations
has been updated by Massimo with new recent publications concerning LHC@home.

2013-11-19: News Status and Plans 19th November, 2013

Please see the MB Number Crunching for an update. Eric.

2013-10-24: Problem October 23rd Fixed

The permissions on the directory for the logs were wrong.
Corrected, and results are being uploaded. A fuller report and
a new Status and Plans will be issued soonest.

2013-10-23: Problems 23rd October, 2013

Sorry for the upload problems. Hope somebody here will
fix this soon. (I thought we had a new record number
of WUs in progress! :-) Eric.

2013-09-13: Status, 13th September, 2013

Still fighting to produce a good set of Linux executables.
Lots of work for Windows systems!
Created some notes on Numerical reproducibility
[url=http://cern.ch/mcintosh]CV and Notes on Floating-Point[/url].

2013-09-06: Status 6th September

New thread as feedback is in several others.
I have resolved the server out-of-space problem for the short term and
we will implement a proper fix soonest.

An issue remains with the Linux executables, I think. I have checked and
informed my colleagues. The ".exe" suffix is confusing but the pni
executables look OK (they crash on my test machine without pni of
course, but are OK on my modern one). We do not have a MAC executable
yet.

Now things have settled down, we will pursue an analysis of the problem(s).
I do not want to go back because we urgently need the new physics in
this version.

Thanks for your patience and understanding. Getting lots of results
anyway. Eric.

2013-09-04: New SixTrack

SixTrack CERN Version 4463 is now in production.

2013-09-02: Testing

Just running "last" tests. Hope to have new SixTrack tomorrow.

2013-09-01: Short Failing Work Units

We are trying to use the test option of the BOINC SixTrack project.
The very short WUs are failing. We have a fix and shall try again
soon. More production to follow. Thanks for your patience.
Eric.

2013-08-30: Status and Plans, 30th August, 2013

Please see Message Boards: Number Crunching: Status and Plans 20th August, 2013
(Sorry about date!). Eric.

2013-05-25: May, 2013 update.

Server down (temporarily I hope). Trying to fix the "unzip" problem.
See my recent posts to Number Crunching: Status and Plans May 25th,
and Results Discrepancies for more info. Eric.

2013-05-08: More work coming now.

We have introduced a new SixTrack Version 4446 and I am resuming
production on an intensity scan as well as running more tests; usual
mixture of short/long run times. We are also trying to return more
results files to help identify problems. Thanks for your help as usual.
Eric.

2013-03-15: Dynamic Aperture Tune Scan

Hello everybody,
after a few technical problems in the last few days, we are now ready to submit a first Tune Scan for the Dynamic Aperture study we are performing at CERN.
These simulations will give us a first hint of how the High Luminosity upgrade for the LHC will work, and in particular the effect of the Beam-Beam interaction will be analysed.
This will be only the first bunch of simulations, because various scenarios are possible for this upgrade, and we need to investigate each one of them deeply to decide which one best fits our requirements... so keep your machines ready to crunch!!

2013-03-10: Interruption for server update

There will be a short server interruption today for a software update. New jobs should come later once we have checked the software chain.

The update is now done. Thanks for your contributions and have a nice day!

2013-02-15: Forum restrictions

Due to spam activity, all forums apart from Questions & Answers: Getting Started now require some BOINC credit to allow posting. If you are a complete newcomer, please check the existing Questions & Answers first.

The team.

2013-02-08: Pause

There will be a pause for a week or two.
See the News "More work" thread for more info.

2013-02-03: More work

Can't keep up but more work coming now.

2013-01-31: Production 2013

Great; as you will have seen, we are running flat out on intensity scans, one million turns max.
Over 100,000 tasks running! The CERN side infrastructure is creaking at the seams.
We will run down in a week or two to introduce a new SixTrack version (with suitable
warning).

2013-01-11: First tests 2013

Trying to run a few thousand cases from Scientific Linux 6 (SLC6)
here at CERN. Eric.

2013-01-06: A Happy New Year

Thanks for all the support in 2012 (and before). There has been a further delay due to a power cut
(broken PC) and the CERN annual closure for two weeks. Once again, more detailed
information when I have recovered. So a Happy New Year, and I am hoping for
an even better 2013.

2012-11-28: Problems/Status 28th November, 2012 and PAUSE

Discovered some problems with result replication! And we have run out of
disk space at CERN. There will be a pause, for a few days at least,
while I investigate and resolve. (Will post details soonest to the
MB Number Crunching.) Eric

2012-11-15: Status, Thursday 15th November

Hiccup; mea culpa. On vacation and travelling since Tuesday
and ran out of disk space in BOINC buffer at CERN :-(
I think all is OK again now after corrective actions and more work
is on the way. Sorry about that. Eric.

2012-11-04: Status and Plans, Sunday 4th November

First, the service continues to run well; the first intensity scan is nearing completion with well over a million results in 15 studies successfully returned. Just a couple of hundred thousand more!
(Sadly no one study is complete, but a couple are very close and I shall start post-processing and analysis soon. I am still reflecting on the thread "Number crunching; WU not being sent to another user".
This is not easy, trying to get studies complete while keeping the system busy. I am the "feeder" and since in the end I need all the studies, I am rather prioritising keeping WUs available.)

Just checked and we have over 80,000, yes eighty thousand WUs active and this is a new (recent) record.

Draft documentation of the user side is now available thanks to my colleague R. de Maria. If you are interested:
[url=http://sixtrack-ng.web.cern.ch/sixtrack-ng/]SixDesk Doc[/url]
and I hope you can access it (otherwise I shall put a copy on LHC@home).

Right now I hope to try new executables with new physics on our test server, and I might shortly appeal for some volunteers to help (and also to run a few more 10 million turn jobs). I do NOT want to risk the production service while it is running so smoothly.

Otherwise (At Last!) I shall start writing my paper on how to get identical results on ANY IEEE 754 hardware with ANY standard compiler
at ANY level of Optimisation. Thanks to all. Eric.

2012-09-29: Status and Plans, Saturday 29th September, 2012

All running very smoothly indeed. Just a problem with deadline scheduling which I hope we can discuss and resolve on Monday, especially with some feedback from the BOINC meeting in London.
Also some hiccups on the CERN AFS infrastructure.
I am now hoping to prioritise the writing of my paper on numeric results reproducibility but I am continuing to run work for the next weeks as described in my new thread "Work Unit Description"
in the Message Board "Number Crunching".
I am also pondering how to best handle "very long"
jobs bearing in mind your feedback.
And of course I shall try and keep you informed.

Thank you for your continued support. Eric.

2012-09-09: Status, Sunday 9th September, 2012.

All running well still. One user reports "Maximum Elapsed Time Exceeded",
though on several (all?) of his WUs.
Still checking for MacOS results but no
further complaints at the moment.

I present some basic info.

There have been several changes to URLs and servers outwith my control. The correct site is http://lhcathomeclassic.cern.ch/sixtrack/
This can indeed be found easily from LHC@home and then The Sixtrack Project (rather than Test4Theory). The current server is boinc05.cern.ch.

I define "normal" WUs as 10**5/100,000 turns, but remember all particles may be lost after an arbitrary number of turns, sometimes even just a few turns at large amplitudes.
Long WUs are 10**6 or one million turns and very Long WUs
10**7 or 10 million turns, and who knows maybe one day 10**8 turns.
That depends on how the floating-point error accumulates and at which point the loss/increase of energy and loss of symplecticity invalidate the results. It will be exciting to find out.

For Functionality, Reliability and Performance.
While waiting for the LXTRACK user node and the second server for test and backup (I assume they will finally get approved!):

Functionality; adequate for the moment. It would be good to have a priority system with three levels (a sketch follows the list):
1. Run first, after other Level 1.
2. Normal; queue after Level 1 and before Level 3.
3. Run only if No Level 1/2 tasks queued.
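
(A minimal sketch of such a three-level scheme, assuming a simple in-memory queue; the real scheduler would of course live in the BOINC server, not here.)

    import heapq
    import itertools

    counter = itertools.count()          # preserves FIFO order within a level
    queue = []

    def submit(wu_name, level):          # level 1, 2 or 3
        heapq.heappush(queue, (level, next(counter), wu_name))

    def next_wu():
        """Pop the lowest level first; level 3 runs only when 1 and 2 are empty."""
        return heapq.heappop(queue)[2] if queue else None

    submit("long-study-wu", 2)
    submit("urgent-wu", 1)
    submit("filler-wu", 3)
    print(next_wu(), next_wu(), next_wu())   # urgent-wu long-study-wu filler-wu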

I am thinking in terms of running 10**7 jobs as a series of 10**6 jobs. This requires returning and submitting more data, the fort.6 output and the checkpoint/restart files as a minimum. This would be very good additional functionality in itself.
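
(Schematically, the idea is a chain of segments, each resuming from the previous checkpoint; track_segment() below is a hypothetical stand-in for a SixTrack WU, shown only to illustrate the chaining.)

    def track_segment(checkpoint, n_turns=10**6):
        """Hypothetical stand-in for one SixTrack WU: resume from the
        checkpoint, track n_turns, return a new checkpoint plus output."""
        turns_done = checkpoint["turns"] + n_turns
        return {"turns": turns_done}, "fort.6 output after %d turns" % turns_done

    checkpoint = {"turns": 0}            # a fresh start
    outputs = []
    for segment in range(10):            # 10 x 10**6 = 10**7 turns
        checkpoint, fort6 = track_segment(checkpoint)
        outputs.append(fort6)            # each segment returns its results
    print(checkpoint["turns"])           # 10000000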

Reliability; pretty good but needs the backup server, LXTRACK, and less reliance on CERN AFS.
We should provide a quick test (1 or 2 minutes) to verify that a node produces correct results without running a whole WU. This would not obviate result validation but would avoid wasting resources.
I could also provide a longer test on the WWW with canonical results that any volunteer could run if they suspect they have over-clocked or are getting results rejected.

Performance; pretty good now with SSE2, SSSE3, PNI or whatever.
Should implement GPU option. Should measure the cost of the numeric portability.
(Incidentally Intel are hosting a Webinar on this topic on Wednesday, but I guess it will address only Intel H/W.)

2012-09-02: Status, 2nd September, 2012

Well all seems to be running rather well as seen from the
CERN side. So I present the topics for review on Tuesday.
1. IT report on LXTRACK proposal (to greatly improve facilities for the
physicists including more disk space and much improved reliability).
2. Proposal for a second "test" server (to test very long jobs, and to try returning
the full results, without affecting the current service).
3. Project Status and open issues from the MBs:
a) More buffered work (user request).
b) Access to boinc01! Apparently there are some attempts to contact this obsolete service.
Could be WWW pointers or whatever.
c) HTTP problems, one user? (I need to send byte count and MD5 checksum.)
d) MacOS executable. Open issue; works for some people.
e) Deadline scheduling. It seems that work is deleted because volunteers fear their
contribution will be wasted. But is this true? I have 99.999% of results OK, but how many
WUs were not credited............
f) GPU enabled SixTrack
4. A.O.B. including Date and time for a small party and the invitation list
to celebrate recent progress and the many helpful comments and suggestions.

2012-08-26: Status, 26th August, 2012

MacOS executable is working, for some at least.
I have queued 500,000 jobs, intensity scan,
while I clear the decks. Many thanks for all the
suggestions and comments on (very) long jobs.

2012-08-22: Very long jobs

I am now going to submit just a few hundred very
long 10**7 turn jobs to complete two studies.
I think this will be OK now; we shall see.

2012-08-20: Credits

Please see the Message Board Number Crunching, Thread Credits for some
hopefully good news from Igor.

2012-08-19: Status, 19th August, 2012.

All is running rather well; over 100,000 tasks queued, and over 56,000 running. I have a bit more work prepared, but badly need to do some analysis. After some flak, we have been receiving many messages of support and also a lot of help in identifying the problem with the MAC executable.

Igor has identified and corrected the problem with Credits and is still cleaning up and trying to repair.
(This was my fault; trying to run 10**7 turn jobs taking 80 hours.
However I can report that 99% of them have completed successfully,
and others are still active.)

The Mac executable issue may even be solved, but we need to watch for the next days still.

There may be a problem with Deadlines....we shall see.

I am waiting for PC support to install my NVIDIA TESLA, memory and upgraded power supply, and Linux. I am ready to install the software next and try Tomography. There is some interest in ABP especially for existing MPI applications. We shall see.

I have STILL NOT finished the SixDesk doc or prepared the tutorial.

I take this opportunity to outline the LXTRACK system: I hope IT support could fill in the details and do it.

The justification is that AFS limitations and problems have made life very difficult.
I have used my desk side pcslux99 (thanks to Frank who donated it) as a prototype to run several hundred thousand jobs over the last few weeks.
Sadly I do not have the LSF commands like bjobs and bsub, as it is an old 32-bit machine, and I am NOT wanting to become a sysadmin again. It has almost 200GB of disk space of which I am using only 12%, but increasing. Under this setup I have virtually no problems and do everything with the SixDesk scripts called from master scripts in acrontab entries.

LXTRACK should be a "standard" lxplus virtual machine, i.e. with LSF and CASTOR and SVN and AFS etc. etc., BUT with at least a terabyte of NON-AFS disk space, /data, say. Only users in the AFS PTS group boinc_users should be allowed to log in.
(We could even create the /data/$LOGNAME directory for them.) How can we manage this space? Given the small number of cooperative users, a script to monitor it is probably adequate (a sketch follows below).
Processes should NOT be killed for exceeding CPU or real time limits.
Later, ideally, we could possibly create non-AFS buffers for communication with BOINC.
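
(The monitoring script could be as simple as the following sketch. The /data layout and the per-user directories follow the proposal above; nothing here is the actual LXTRACK setup.)

    import os

    DATA = "/data"                       # the proposed non-AFS volume

    def usage_bytes(path):
        """Total size of all files below path."""
        total = 0
        for root, _dirs, files in os.walk(path):
            for name in files:
                try:
                    total += os.path.getsize(os.path.join(root, name))
                except OSError:
                    pass                 # file vanished mid-scan
        return total

    if os.path.isdir(DATA):
        for user in sorted(os.listdir(DATA)):   # one directory per user
            gb = usage_bytes(os.path.join(DATA, user)) / 1e9
            print("%-12s %8.1f GB" % (user, gb))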

2012-08-16: MacOS Executable

(Re-)activated MacOS executable built on MacBook PRO.
Will be watching closely for errors. Eric and Igor.

2012-08-12: Status 12th August

All is running rather well from CERN side and I have initiated an intensity scan to run while I work a bit on the GPU. I have a real time deadline and I
must try this over the next two weeks. In spite of a couple of issues
with the CERN infrastructure I have still managed to queue over 90,000 Work Units as part of an Intensity scan (different bunch sizes and charge).

We are getting flak about credits or points. One obscene message I tried to hide, but the user said he got only
200 points for 80 hours when he expected at least 1000, and another user 62.70 points for 110 hours. So we lost a couple of volunteers, but we are also getting support with over 40,000 active Work Units.

There is also an issue with the real time deadline for my 10 million turn jobs.

I hope to fix the MAC executable next week with my colleague.

2012-08-12: Status, 12th August

Please see the NEWS Message Board.

2012-08-07: Status/Plans, 7th August 2012

First, many thanks for your continued support. From my/CERN side all has been running rather well and I am submerged by results.
I now need to take some time to analyse them, in particular to decide between the two methods of computing the beam-beam effect.
Then I shall probably submit several studies to do an intensity scan where I study the beam-beam effect depending on the size, and hence charge, of the accelerated bunch of particles.

At the same time, I must finish the documentation of the "user"
infrastructure so that my colleagues may easily use BOINC as they return from vacation. In addition I want to set up a dedicated "user" system "lxtrack" in order to provide disk space here and to try and keep up with the results as they are returned.

I have to look at the Deadline problem for 10**7 turn jobs.
I set a bound of 30 days for any WU.... I need to discuss with Igor if that is NOT what you see at home. Of course we really want a low bound to get results back quickly, but I also want to use older, slower systems. We shall have to work out some sort of compromise. My attempt at 10**7 turns was probably a bit over the top, but I was keen to try it.

We hope/expect to produce a valid MAC executable this week. I also need to add some new "physics", new elements, to Sixtrack as provided by a colleague. (Also need to add modifications for "Collimation" but they are not relevant to BOINC.)
The next version should also support SSE4.1.

I was very pleasantly surprised to win an NVIDIA TESLA C2075.
The catch is that I have to use it and program it with OpenACC. There will doubtless be some hiccups installing the board and the necessary
(PGI) software. I shall in fact try my "Tomography" application which already runs in parallel using HPF or openMP. If that works I shall seriously consider a multi-threaded Sixtrack (using GPUs or not) by tracking many more particles in each Work Unit. Non-trivial but rather exciting. I am just at the ideas stage here, but.....it would of course use multiple threads on a multiple core PC as well. A dream?

Finally, I have to take time to publish my work on floating-point portability and reproducibility. I believe I might be the only person who gets identical bit for bit 0 ULP different results after many Gigaflops with 5 different Fortran compilers at different levels of optimisation.

2012-07-16: MAC executable

STOP PRESS: Trying a new prototype executable for MACs.
Built with ifort defaults on a macBook Pro (using sse3 I guess).
Eric and Igor.

2012-07-11: Server status

My colleague has cleaned the database and I think that is the end of the http errors etc. etc.
I have submitted new work and I am always getting results anyway. There is still a whole
bag of worms around sse2 sse3 ssse3 pni and whatever, not helped by Intel ifort's
refusal to run optimised code on non-Intel hardware.

Igor has much improved version distribution and some people are getting "PNI"
versions. The important thing is that SSE2 upwards is much faster than the generic
version. Don't want to waste resources. All versions are completely numerically
portable (I hope so) but when panic is over I shall be looking at all rejected results
as I believe they are due to hardware failures (over-clocked?).

If all goes well I shall try and issue an update to whatever happened to lhc@home
this weekend.

In the meantime someone has changed the WWW pages, or whatever and I don't even know if you
can read this. All my bookmarks failed and usual start page NOT available.

Eric (from his new super MAC notebook pro, bought at great personal expense,
but have never had the time to set up. I am going to try and install BOINC now.)

2012-07-04: Server/Executable problems

An exciting day; a new particle and maybe even the Higgs boson itself.

We have been busy preparing new executables for BOINC, including a MAC
executable.

Sadly we have run out of disk space and there are likely to be some hiccups
for the next few hours, hopefully not longer. We have three new executables for
both Windows and Linux: run anywhere, use SSE2, use SSE3. The run anywhere is
slow but every little helps. The executable for MAC requires at least SSE3 I
believe and the exact requirements are not well understood as I write.
I am currently running tests on as many types of hardware as I can.

The disk full situation can cause havoc and certainly explains why you have
not been able to get more work for the last hours.
More news as soon as we make some progress.

Thanks for your continued support, which will help to make an even better
LHC for 2015. Eric.

2012-06-05: Sixtrack server migration today

Dear volunteers,

The Sixtrack BOINC project has been migrated to a new server today. If you should encounter any difficulties with the setup, please detach from the project and attach again.

BOINC and Sixtrack should be fully operational again from 2 PM CEST (12:00 UTC).


Best regards, the BOINC service team.

2012-04-04: Project News April 04, 2012, 14:30 GMT

Forum under maintenance. Any updates may be lost.

2011-09-16: Project News September 16, 2011, 22:30 GMT

Added a Server Status link to this page.

2011-09-15: Detach and attach again if you have problems

Please detach from the project and attach again to the project URL on this page if you get scheduler errors. By mistake, a reference to our old server had been cached on the BOINC client side, and therefore it may be necessary to re-initialise clients following a server move at CERN. Sorry for this glitch.

2011-09-15: Problem with job scheduling.

Scheduler problems on new server

2011-09-02: Long Live LHC!

RESTART OF OPERATION @ CERN

2011-08-23: Announcement

Moving the LHC@HOME activity to CERN.

2011-08-19: Project News 19.08.2011 15:35 BST

Migration of LHC@home to CERN
Since May 2007 LHC@home has been hosted at Queen Mary, University of
London. One thing that is certain in life is change, and it is now time
for LHC@home to move back to CERN, the home of the LHC experiment.
Next week, 22nd to 26th August, we will be migrating the project back to
CERN, so please expect the project to be down for a few days from
Monday.
We would like to thank you for your support over the last four years and
hope you will continue to subscribe to LHC@home when it is back at
CERN. We also wish the new CERN BOINC admins all the best as they take
the reins.
Happy number crunching!
Alex and Neasan

2011-08-05: Project News 05.08.2011 11:00 BST

LHC@home 2.0 launched
We are announcing the start of beta testing of the first
second-generation LHC@home project.
It is the first of what is expected to be a series of physics
applications running on the
LHC@home 2.0
Volunteer Cloud platform.
This is a test project, to demonstrate the use of virtual machines using
the CERN-developed
CernVM
technology inside BOINC systems to perform
fully-fledged LHC physics event simulation on volunteer computers.
All LHC@home volunteers are welcome to join the beta testing, which will
initially accept about 1000 active helpers. Full information for joining
is given at:
LHC@home 2.0
The LHC@home 2.0 team

2011-06-20: Project News 20.06.2011 16:00 BST

Project Registration Suspended
Due to technical problems, registration to LHC@home has been indefinitely suspended.
Sorry about this.
We are working on an upgrade which will let us re-enable the account creation but the timeline is not yet completely clear.
Alex

2011-06-17: Project News 17.06.2011 21:45 BST

Project registration closed
Due to technical problems, registration to LHC@home has been indefinitely suspended.
Sorry about this.
Alex

2010-05-25: Project News 25.05.2010 14:00 BST

Server Down Tonight
Due to essential electrical work the LHC@HOME server
will suffer network interruptions this evening. The server will
remain on, but the network links will go down and, we hope, will come
back up! I expect the work to take about 2 hours and to happen sometime
between 18:00 BST Tuesday and 06:00 BST Wednesday.
Sorry about this.
Alex

2010-05-12: Project News 12.05.2010 14:00 BST

Server Down Tonight
Due to essential electrical work the LHC@HOME server will be switched off at 17:00 today, 2010-05-12, and switched back on at
10:00 tomorrow, 2010-05-13. Sorry about this!
Alex

2010-03-30: Project News 30.03.2010 12:00 BST

We have just had collisions at 7TeV in the LHC.
Follow the progress at CERN on
Twitter
Later days,
Neasan

2010-03-24: Project News 24.03.2010 15:25 GMT

As some will have heard, the LHC will be trying to collide particles at
7TeV on the 30th of March. This means two beams circulating at 3.5TeV
being collided. They have had the beams at this energy level since the
19th of March and have been fine-tuning operations at this level since
then. Once the team have 7TeV collisions, the machine will run at this
level for 2 years. It will then stop, have some maintenance/upgrades, and
hopefully move to 14TeV collisions after that.
Later days,
Neasan

2010-03-18: Project News 18.03.2010 15:25 GMT

There is a bug in the current version of SixTrack relating to
viewing the screensaver/graphics. Attempting to run the visuals will
kill the work unit. Obviously we know about the issue and we are
working on a solution; until then, please don't touch the 'show graphics'
button or let your screensaver try to use the LHC@home
visuals.
Thanks, Neasan
Update: On May 10th we installed SixTrack version 4209 which we hope
solves this. Alex

2010-03-04: Project News 04.03.2010 12:10 GMT

Two announcements:
1) The LHC is back up and running; you can follow its progress on the
CERN twitter feed @CERN or
the @lhcstatus feed.
The latter is actually a scrape of the screens that the physicists watch
to monitor the LHC machine.
2) LHC@home is also back. After a long quiet period we have new
studies to work on. The executable has been updated (which is why there
is not a Linux version available yet) and the project is now crunching
simulations for the LHC upgrade. The team at CERN are working on the
Linux executable and the team at QMUL are looking at the server code and
are hoping to update that soon.
This is all I will say for the moment. However, while LHC@home is being
used for beam studies of the LHC machine, running actual data from the
experiments is unlikely, as the code has been written for the grid and
the datasets are measured in 100s of gigabytes. We
have been looking into other LHC uses for the project but it is not
easy.
Later days,
Neasan

2010-01-04: Project News 04.01.2010 11:45 GMT

The server will be down from 5pm GMT on the 8th of January and should be back up and running by 11am on 11th of January.
Sorry for the interruption; there will be no power in the department over the weekend.

2009-11-06: Project News 06.11.2009 10:45 GMT

You couldn't make it up:
LHC 'bird-bread' strike
On Tuesday 3 November, a bird carrying a baguette bread caused a short
circuit in an electrical outdoor installation that serves sectors 7-8
and 8-1 of the LHC. The knock-on effects included an interruption to the
operation of the LHC cryogenics system. The bird escaped unharmed but
lost its bread.
The standard failsafe systems came into operation and after the cause
was identified, re-cooling of the machine began and the sectors were
back at operating temperature last night. The incident was similar in
effect to a standard power cut, for which the machine protection systems
are very well prepared.

2009-08-20: Project News 20.08.2009 10:45 GMT

The LHC has restarted and has been circulating beams for three days now.
To keep up to date on all the latest news follow the CERN twitter feed
at twitter.com/cern

2008-11-03: Project News 3.11.2008 09:55 GMT

Today I have installed a new application "sixtrackbnl". This is simply an updated version of
the sixtrack executable. We only have this available for Windows at the moment, but as time permits we
will try to produce a Linux build. This has been installed under a new application name as we wish to be
able to send jobs to either the old or new version and compare results. In due course we may well
rename the new executable to sixtrack and carry on as before.
Expect some test jobs on this new application to begin with. We will try to post some more
information regarding the studies being performed when we get some production jobs running on it.
As ever thanks for your contributions.
Alex

2008-09-29: Project News 29.09.2008 15:37 BST

As you've probably all heard from the mainstream media, there was a helium leak from the cooling system for the LHC
superconducting dipole magnets, which will result in several months' downtime for repairs.
It's an unfortunate setback, but not entirely surprising: with a machine of this complexity, there are bound to be a
few snags at the beginning. As Lyn Evans, the leader of the LHC project, has put it, "the LHC is its own prototype".
The really unfortunate thing is that it takes so long to warm up the magnets to do repairs and cool them down again.

Despite this setback, LHC@home will keep chugging along regardless, to help ensure the machine runs as smoothly as
possible once the repairs are done, and to start doing research for future upgrades of the machine, planned in a few
years' time. So your contribution is still very much valued!

Thanks,
Neasan

2008-09-18: Project News 18.09.2008 09:30 BST

OK, last week was a bit busy with conferences and switch-on events.
Hope you enjoyed the blow-by-blow account from our man on the ground.
The server here got hammered for the entire week as BBC4 had a link to us, but we hope everyone was able to get the work that was available.

Switch on itself went brilliantly, and this was a direct result of LHC@home.
If you had not been crunching data there is a good chance that the two beams would not have been circulated that quickly; they could have spent ages just getting beam through one sector.
We are still waiting on collisions; there was a chance that we'd see them yesterday, but it looks like it is next week now.
There is still plenty of work needed on the magnets as they ramp up to full luminosity, so there will be plenty of SixTrack work from CERN.
Fingers crossed, and yes, I know I keep saying this, but scientists can work slowly (meticulously, but slowly) and we will have some more apps running.
The scientists are looking into the 6.X issues with BOINC as well.
Thanks,
Neasan

2008-09-10: Project News 10.09.2008 15:23 BST

Live from CERN - News from the guy wearing an LHC@home t-shirt
Latest rumours are that there could be colliding beams (at low energy, mind you)
within 24 hours (Steve Myers) or at least within a week (Paul Collier). Wow!!!

The guy with the LHC@home t-shirt will be signing off now from the CERN Control
Room, but I just heard Brian Cox speaking eloquently on BBC radio about the benefits
of fundamental science, and getting everybody more involved in it.

LHC@home is a great way to get involved. The volunteers' PCs have helped the
accelerator physicists do the simulations that in turn made today such a big success. Thank you, each and every one!

2008-09-10: Project News 10.09.2008 14:07 BST

Live from CERN - News from the guy wearing an LHC@home t-shirt
The beam just circulated counterclockwise for the first time, to huge applause. At this rate they should have the Higgs nailed by the weekend (just kidding).