Minutes y07m12d22



Present: Aya, Keiichi, Mina, Mio, Shigeru (Chiba)
         Henrik (Stockholm)
         Carsten (PSU)
         David, Kotoyo, Sean (UW-Madison)

  (a) The Good run list update (Carsten)

   The good/bad run list has been updated and can be found
   on the wiki page.
   Carsten will inform the EHE group of the new list
   whenever it is updated. The next update will take place
   early next year.

  (b) Brief report of the Standard Candle call (Aya)

   Refer to her minutes for the summary. The MC predicts
   more photons than the real data for distant DOMs,
   but this trend is reversed for the closest DOM to the candle.
   The systematic shift of the event-wise total NPE
   is found to be around 25% (the MC gives fewer NPE).

  (c)  IC-22 analysis (Keiichi)
   The data is mostly in good shape after excluding the bad runs,
   but the DOM launch time distribution looks strange:
   it has a three-peak structure.
   Some of the peaks can be understood as an effect of SLC,
   when none of the neighboring DOMs are launched in the first launch
   but they are in the second launch. The third peak
   is more mysterious. The events look like double-coincident
   muons, but we do not understand why they form a peak in
   the launch time distribution.

   As for the MC/data comparison, they look consistent with each other,
   but the azimuth angle distribution exhibits poorer agreement
   and needs to be checked. David also pointed out that the number of
   events in the data exceeds that of the MC in the low-NPE region
   (around 10^4). This is very likely induced by the energy threshold
   bias in the simulation, which only generates events above 10^5 GeV.
   Since an NPE of 10^4 is far below the region involved in the EHE
   analysis, we do not have to worry about it.

   Among the remaining tasks, understanding the droop
   simulation/correction and setting up high-energy Corsika
   with SIBYLL are the most urgent. Keiichi is looking for
   volunteers to work on them with him. Applying the droop
   correction to the Standard Candle data was proposed
   as a test of the correction's accuracy.
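
   For reference, a minimal sketch of a generic first-order droop
   correction: the AC coupling through the toroid acts roughly like a
   single-pole high-pass filter with time constant tau, so the digitized
   waveform sags below the baseline, and inverting that filter amounts to
   adding back 1/tau times the running integral of the measured trace.
   The sampling step and time constant below are placeholders, not the
   real DOM values, and the actual simulation/correction code may differ.

      def droop_correct(waveform, dt_ns=3.3, tau_ns=15000.0):
          """Correct a baseline-subtracted waveform for single-pole droop.
          dt_ns and tau_ns are illustrative placeholders."""
          corrected = []
          integral = 0.0
          for v in waveform:
              integral += v * dt_ns              # running integral of the measured trace
              corrected.append(v + integral / tau_ns)
          return corrected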

  (d)  IC-80 with EHE "supercut" (Aya)
   The EHE supercut was presented. It is still based on
   logNPE and the cosine of the zenith angle, but different
   criteria are applied to events depending on the depth
   of the center-of-brightness position and on whether the ATWD
   records more photons than the FADC. Both are indices of
   how the track geometry relates to the IceCube instrumented
   volume. For example, events with a deeper center-of-brightness
   z position are less likely to be vertical downgoing muons.

   The event rate and the resultant sensitivity are now improved
   by 40%. One event per year is expected from the GZK mechanism :-)
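
   As a rough illustration of the structure of such a two-branch cut
   (the thresholds and the depth split below are placeholders, not the
   actual criteria shown in the call):

      import math

      def passes_supercut(npe, cos_zenith, cob_z, atwd_npe, fadc_npe):
          """Sketch of a category-dependent logNPE vs cos(zenith) cut.
          All numbers are illustrative placeholders."""
          lognpe = math.log10(npe)
          deep = cob_z < -200.0                  # placeholder depth split (detector z, m)
          atwd_dominated = atwd_npe > fadc_npe   # ATWD records more photons than FADC
          if deep or atwd_dominated:
              # geometry favors tracks passing through the instrumented volume:
              # a milder, weakly zenith-dependent NPE requirement
              threshold = 4.6 + 1.0 * max(cos_zenith, 0.0)
          else:
              # shallow, FADC-dominated events are more likely vertical
              # downgoing atmospheric muons: demand more light
              threshold = 5.0 + 1.5 * max(cos_zenith, 0.0)
          return lognpe >= threshold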

  (e) Time residual study (Kotoyo)

   Plots of the time residual distribution show a significant difference
   between the minimum bias data and the Corsika simulation. The agreement
   for DOMs in the dusty layer appears worse: the simulation assumed
   cleaner ice than the real data indicate, so the dusty layer may be
   even more dusty. A simple scaling of the absorption length
   in the AHA ice model done by Kurt seems to reproduce the behavior
   of the real data, but not that of the simulation, so photonics may be
   a source of the discrepancy. Another suggestion, made by Henrik, is to
   study the effect of stopping muon events. Kotoyo will look at the
   track length of the events to see if this is a problem.
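
   For reference, the time residual is the observed hit time minus the
   arrival time expected for an unscattered photon radiated at the
   Cherenkov angle from the track. A minimal sketch with textbook geometry
   and an approximate index of refraction (the production code uses
   photonics tables instead):

      import math

      C_VAC = 0.2998                      # m/ns
      N_ICE = 1.31                        # approximate index of refraction
      THETA_C = math.acos(1.0 / N_ICE)    # Cherenkov angle

      def direct_time(t0, vertex, direction, dom_pos):
          """Arrival time of an unscattered Cherenkov photon at dom_pos from a
          muon passing 'vertex' at time t0 along the unit vector 'direction'."""
          dx = [p - v for p, v in zip(dom_pos, vertex)]
          l_par = sum(d * u for d, u in zip(dx, direction))     # distance along the track
          d_perp = math.sqrt(max(sum(d * d for d in dx) - l_par**2, 0.0))
          t_mu = (l_par - d_perp / math.tan(THETA_C)) / C_VAC   # muon to the emission point
          t_ph = d_perp * N_ICE / (math.sin(THETA_C) * C_VAC)   # photon to the DOM at c/n
          return t0 + t_mu + t_ph

      def time_residual(t_hit, t0, vertex, direction, dom_pos):
          return t_hit - direct_time(t0, vertex, direction, dom_pos)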

  (f)  The track reco using the Cherenkov angle profile (Henrik)
   The hyper reconstruction uses the first hit times under the assumption
   that in-ice photons propagate without scattering. The timing is then
   determined entirely by the geometry of the track radiating photons at
   the Cherenkov angle. The first results for the JULIeT EHE muon events
   look promising - better than the linefit-based first guess.
   Henrik will continue its development and see how it works
   for events selected by the EHE supercut. One thing still to be done
   is to use Gulliver as its minimizer; it relies on ROOT's
   MINUIT at present. David will help him with the Gulliver implementation.
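
   A minimal sketch of the idea (not Henrik's actual code): treat every
   first hit as an unscattered Cherenkov photon, predict its arrival time
   from the track geometry, and minimize the time mismatch over the track
   parameters. scipy stands in for the MINUIT/Gulliver minimizer here, and
   the timing uncertainty is a placeholder:

      import math
      from scipy.optimize import minimize

      C_VAC, N_ICE = 0.2998, 1.31                 # m/ns, approximate index of refraction
      TAN_C = math.tan(math.acos(1.0 / N_ICE))    # tangent of the Cherenkov angle

      def predicted_time(track, dom_pos):
          """Direct-photon arrival time (same geometry as the time residual sketch)."""
          t0, x0, y0, z0, zen, azi = track
          u = (-math.sin(zen) * math.cos(azi),    # zenith/azimuth point back to where
               -math.sin(zen) * math.sin(azi),    # the muon came from, so the direction
               -math.cos(zen))                    # of travel carries a minus sign
          dx = (dom_pos[0] - x0, dom_pos[1] - y0, dom_pos[2] - z0)
          l_par = sum(d * ui for d, ui in zip(dx, u))
          d_perp = math.sqrt(max(sum(d * d for d in dx) - l_par**2, 0.0))
          return t0 + (l_par + d_perp * TAN_C) / C_VAC

      def chi2(track, hits, sigma_t=15.0):        # sigma_t (ns) is a placeholder
          return sum(((t - predicted_time(track, pos)) / sigma_t) ** 2
                     for pos, t in hits)

      def hyper_fit(hits, seed_track):
          """hits: list of ((x, y, z), first_hit_time); seed_track: e.g. a linefit
          result as (t0, x0, y0, z0, zenith, azimuth)."""
          return minimize(chi2, seed_track, args=(hits,), method="Nelder-Mead").x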

  (g)  The waveform reco update (Sean)

   A comparison of the different energy reconstruction modules
   was made: N-channel, charge/length, rime, and wfllh.
   The wfllh works the best, although Sean mentioned that
   its algorithm is based on the same principle as rime,
   so in the end we expect rime to give almost comparable results,
   especially for high-energy data.

  (h) The Gulliver project update (David)

   Many small changes were made for
   the double muon reconstruction. David is also introducing
   some modifications for faster processing by
   calculating products directly instead of computing logarithms.
   This is realized by a numerical trick that keeps the values
   within the floating-point range. It is cool!
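
   A minimal sketch of that kind of trick (the actual Gulliver code may
   differ): split every factor into mantissa and exponent and keep the
   running product as mantissa * 2**exponent, so a product of very many
   small per-DOM probabilities stays representable without ever taking
   a logarithm.

      import math

      def product_no_log(factors):
          """Multiply many small (or large) positive factors without
          under/overflow; returns (mantissa, exponent) such that
          the product equals mantissa * 2**exponent."""
          mant, expo = 1.0, 0
          for f in factors:
              m, e = math.frexp(f)         # f = m * 2**e with 0.5 <= |m| < 1
              mant *= m
              expo += e
              mant, e2 = math.frexp(mant)  # renormalize the running mantissa
              expo += e2
          return mant, expo

   For example, a product of ten thousand factors of order 1e-30 would
   underflow a naive float product, but stays representable as a
   (mantissa, exponent) pair.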