WEBVTT

00:00.000 --> 00:10.000
Okay, so, hello everyone.

00:10.000 --> 00:13.000
Thank you for letting me present today at FOSDEM.

00:13.000 --> 00:17.000
I would like to talk today about unified full-stack performance analysis

00:17.000 --> 00:21.000
and automated computer system design at CERN.

00:21.000 --> 00:25.000
In relation to the recent project, which we have started,

00:25.000 --> 00:27.000
called Adaptyst.

00:27.000 --> 00:30.000
So, to start things off, a few introductory words about myself.

00:30.000 --> 00:31.000
So, hello.

00:31.000 --> 00:32.000
I'm Max.

00:32.000 --> 00:35.000
Currently, a junior performance research engineer,

00:35.000 --> 00:37.000
working for nearly three years at CERN.

00:37.000 --> 00:42.000
And I will actually be a PhD student starting in April this year,

00:42.000 --> 00:44.000
starting at CERN as a PhD student.

00:44.000 --> 00:49.000
And when it comes to my background, I'm a computer scientist by training.

00:49.000 --> 00:53.000
I graduated with a master's degree from Imperial College in 2022,

00:53.000 --> 01:00.000
with a thesis dedicated to performance analysis closer to hardware, in very short words.

01:00.000 --> 01:07.000
And my interests revolve around everything related to software and hardware,

01:07.000 --> 01:10.000
specifically at the boundary of software and hardware.

01:10.000 --> 01:14.000
So, performance analysis, computer architecture, compilers,

01:14.000 --> 01:18.000
operating systems, FPGAs, et cetera.

01:19.000 --> 01:23.000
Especially in the context of applications to physics and space research,

01:23.000 --> 01:29.000
which is one of the main reasons why I work at CERN at the moment.

01:29.000 --> 01:31.000
And one more thing from my side:

01:31.000 --> 01:34.000
this is my first FOSDEM ever.

01:34.000 --> 01:39.000
So, apart from presenting Adaptyst, I'm also here to learn and network.

01:39.000 --> 01:43.000
And I will be happy to hear constructive feedback from you after the presentation.

01:43.000 --> 01:46.000
And the second disclaimer, I'm not a physicist.

01:46.000 --> 01:50.000
So, if you have any questions related to the physics we do at CERN,

01:50.000 --> 01:53.000
I may not be the right person to answer them.

01:53.000 --> 01:57.000
Even though I will try to explain some physics we do in our organization,

01:57.000 --> 01:59.000
if time allows.

01:59.000 --> 02:05.000
But what I can recommend to you is actually going to the CERN open source

02:05.000 --> 02:10.000
stand we have at FOSDEM in building K.

02:10.000 --> 02:15.000
So, I would like to ask you a question: who knows what CERN is?

02:15.000 --> 02:17.000
Wow, that's a lot.

02:17.000 --> 02:24.000
So, I have this slide, explanatory slide, just for those of you who are not familiar with our organization.

02:24.000 --> 02:28.000
So, CERN is the European Laboratory for Particle Physics,

02:28.000 --> 02:31.000
which is the world leading particle physics facility,

02:31.000 --> 02:33.000
located in Geneva in Switzerland.

02:33.000 --> 02:38.000
Specifically, it's literally at the border between Switzerland and France,

02:38.000 --> 02:40.000
next to Geneva.

02:40.000 --> 02:43.000
And particle physics wise, it's best known for the Large Hadron Collider,

02:43.000 --> 02:46.000
which is one of the biggest scientific machines in the world,

02:46.000 --> 02:49.000
where particles are accelerated and collided with each other,

02:49.000 --> 02:53.000
at more than 99.99% of the speed of light.

02:53.000 --> 02:57.000
And this is where the Higgs boson was discovered in 2012,

02:57.000 --> 03:03.000
which explains the origin of mass.

03:03.000 --> 03:06.000
But it's also known as the birthplace of the World Wide Web,

03:06.000 --> 03:10.000
for example, computer science wise in the 1990s.

03:10.000 --> 03:13.000
And while our main research involves particle physics,

03:13.000 --> 03:19.000
we also do some research in related fields like astrophysics and cosmology.

03:19.000 --> 03:22.000
But if you think that computing is just a small afterthought at CERN,

03:22.000 --> 03:26.000
given that we are a physics lab, you cannot be more wrong than that.

03:26.000 --> 03:29.000
Actually, computing is one of the three main pillars at CERN,

03:29.000 --> 03:33.000
alongside accelerators and detectors.

03:33.000 --> 03:35.000
And the proof of this is very simple.

03:35.000 --> 03:37.000
Just take a look at the diagram on the slide,

03:37.000 --> 03:39.000
showing our accelerator complex.

03:40.000 --> 03:43.000
This is a multitude of machines, where particles are accelerated

03:43.000 --> 03:45.000
and collided not just with each other,

03:45.000 --> 03:49.000
but also against a fixed target in case of smaller accelerators.

03:49.000 --> 03:53.000
And at many collision points in these accelerators,

03:53.000 --> 03:57.000
and I will mostly talk here about the LHC, where particles are collided,

03:57.000 --> 04:01.000
We have installed particle detectors,

04:01.000 --> 04:04.000
which collect information about interesting physics happening

04:04.000 --> 04:08.000
at the time when accelerated particles collide with each other.

04:08.000 --> 04:12.000
And because, again, in the context of the LHC,

04:12.000 --> 04:16.000
we have one particle collision, every 25 nanoseconds.

04:16.000 --> 04:22.000
This means that these particle detectors generate an enormous amount of data.

04:22.000 --> 04:26.000
Not to mention the fact that the complexity of the machines we have

04:26.000 --> 04:30.000
means that controlling, monitoring and operating and

04:30.000 --> 04:32.000
synchronizing everything with each other,

04:32.000 --> 04:37.000
so that all the particles know where to go inside the accelerators.

04:37.000 --> 04:39.000
It's not a straightforward task.

04:39.000 --> 04:42.000
So all of this creates computing challenges at

04:42.000 --> 04:45.000
virtually all levels from embedded and real-time computing

04:45.000 --> 04:50.000
up to high-performance computing and distributed computing.

04:50.000 --> 04:54.000
And I just want to show you the four major experiments

04:54.000 --> 04:57.000
with particle detectors installed at the LHC.

04:57.000 --> 05:02.000
So we have CMS, LHCb, ATLAS, and ALICE.

05:02.000 --> 05:06.000
So, because this is the performance devroom,

05:06.000 --> 05:08.000
I want to talk about performance.

05:08.000 --> 05:11.000
So obviously performance is very important for us,

05:11.000 --> 05:14.000
and we do several performance analysis tasks,

05:14.000 --> 05:18.000
but it's not without problems.

05:18.000 --> 05:23.000
So this list is just the tip of the iceberg,

05:23.000 --> 05:27.000
and it's a non-exhaustive list, in no particular order.

05:27.000 --> 05:32.000
So for example, we encounter notorious

05:32.000 --> 05:35.000
fragmentation of performance analysis tools.

05:35.000 --> 05:38.000
We use a variety of platforms.

05:38.000 --> 05:42.000
And essentially, for Intel CPUs, you have Intel VTune.

05:42.000 --> 05:44.000
For AMD CPUs, you have AMD uProf.

05:44.000 --> 05:47.000
For NVIDIA GPUs, you have NVIDIA Nsight.

05:47.000 --> 05:51.000
For AMD GPUs, you have ROCProfiler and similar tools.

05:51.000 --> 05:54.000
For some other exotic architecture,

05:54.000 --> 05:58.000
you have some other exotic fancy profiler.

05:58.000 --> 06:02.000
And many of these tools are either proprietary,

06:03.000 --> 06:06.000
not user-friendly, or not compatible with each other.

06:06.000 --> 06:09.000
And this makes life difficult for us,

06:09.000 --> 06:13.000
especially when we want to make comparisons between various platforms,

06:13.000 --> 06:16.000
in order to decide how to optimize our software,

06:16.000 --> 06:18.000
or for example, what hardware to buy.

06:18.000 --> 06:24.000
And in some cases, we don't even have any nice performance analysis tools at all.

06:24.000 --> 06:28.000
For example, in the context of some accelerator control systems,

06:28.000 --> 06:29.000
embedded computing.

06:29.000 --> 06:32.000
Another problem is the necessity of optimizing

06:32.000 --> 06:34.000
and porting code for various architectures,

06:34.000 --> 06:38.000
especially in the context of heterogeneous computing,

06:38.000 --> 06:41.000
which is becoming more and more popular in our circles.

06:41.000 --> 06:43.000
And with heterogeneous computing,

06:43.000 --> 06:45.000
you have a variety of programming models.

06:45.000 --> 06:49.000
You have CUDA, and you also have more unified programming models,

06:49.000 --> 06:54.000
like Kokkos and SYCL, and there's also OpenMP, for example.

06:54.000 --> 06:58.000
And the thing is that our software used for processing data

06:58.000 --> 07:02.000
from the Large Hadron Collider and other experiments is very large.

07:02.000 --> 07:06.000
We are talking about the magnitude of millions of lines of code.

07:06.000 --> 07:10.000
So you can imagine that porting and optimizing everything,

07:10.000 --> 07:14.000
so that it works flawlessly on various architectures,

07:14.000 --> 07:18.000
is not easy, let's say.

07:18.000 --> 07:22.000
Another thing is related to the complexity of our systems,

07:22.000 --> 07:25.000
and a mixture of programming languages and models

07:25.000 --> 07:28.000
across the entire complex.

07:28.000 --> 07:31.000
And this means that analyzing everything in one go,

07:31.000 --> 07:39.000
or at all, performance-wise, is either difficult or close to impossible.

07:39.000 --> 07:44.000
Not to mention problems with resource limitations,

07:44.000 --> 07:49.000
like available hardware, network bandwidth, memory available to your code, etc.,

07:49.000 --> 07:52.000
strict time constraints in real-time systems,

07:52.000 --> 07:54.000
for example, deployed in accelerator controls,

07:54.000 --> 07:58.000
or data filtering systems at the Large Hadron Collider,

07:58.000 --> 08:01.000
which I hope I will be able to talk about

08:01.000 --> 08:05.000
in slightly more detail at the end of the presentation.

08:05.000 --> 08:09.000
And the last point is probably obvious,

08:09.000 --> 08:12.000
but there might be some optimizations,

08:12.000 --> 08:15.000
across the entire stack, like in software, hardware,

08:15.000 --> 08:19.000
or systems in general, which we are not aware of yet.

08:19.000 --> 08:23.000
And maybe this would unlock some very big performance gains

08:23.000 --> 08:26.000
that would allow us to discover more physics,

08:26.000 --> 08:28.000
but we are not aware of these optimizations.

08:28.000 --> 08:31.000
And this is also a problem by itself.

08:31.000 --> 08:34.000
And by performance, I don't mean just runtime.

08:34.000 --> 08:38.000
No, I also mean stuff like energy efficiency,

08:38.000 --> 08:41.000
because sustainable computing is a big topic

08:41.000 --> 08:43.000
in our circles at CERN.

08:43.000 --> 08:46.000
Not to mention some more trivial stuff,

08:46.000 --> 08:49.000
like power and cooling constraints we have.

08:49.000 --> 08:53.000
And given the recent trends in the technology world,

08:53.000 --> 08:57.000
like the slowdown of Moore's law and the end of Dennard scaling,

08:57.000 --> 09:02.000
only algorithmic, only hardware, or only system optimizations

09:02.000 --> 09:04.000
are not enough in isolation.

09:04.000 --> 09:07.000
So you cannot do just one of these alone.

09:07.000 --> 09:11.000
You need to look at the big picture and do everything together nowadays.

09:11.000 --> 09:14.000
So as a response to these challenges,

09:14.000 --> 09:16.000
please say hello to Adaptyst.

09:16.000 --> 09:19.000
An early-phase, comprehensive and architecture-

09:19.000 --> 09:22.000
agnostic performance analysis tool,

09:22.000 --> 09:24.000
addressing your software, hardware, and system needs,

09:24.000 --> 09:27.000
all at once, both today and tomorrow.

09:27.000 --> 09:30.000
And Adaptyst was born as part of the SYCLOPS project,

09:30.000 --> 09:32.000
funded by the European Union,

09:32.000 --> 09:35.000
and it has four main characteristics.

09:35.000 --> 09:37.000
The first one is the modular design,

09:37.000 --> 09:40.000
which makes Adaptyst always up to date with the market,

09:41.000 --> 09:47.000
and it makes Adaptyst

09:47.000 --> 09:54.000
able to analyze the performance of any workflow you can imagine,

09:54.000 --> 09:58.000
and any hardware and any system combination you can imagine,

09:58.000 --> 10:03.000
as long as there is support provided for it by contributors,

10:03.000 --> 10:09.000
either from our team or even from some of you in the future.

10:09.000 --> 10:13.000
And this support is provided through modules for system and hardware components,

10:13.000 --> 10:17.000
and plugins, in case of workflows.

10:17.000 --> 10:20.000
But plugins are currently not supported in Adaptyst,

10:20.000 --> 10:23.000
but the support for these plugins will be available soon.

10:23.000 --> 10:28.000
And don't worry, if there is something which is not 100% clear yet,

10:28.000 --> 10:31.000
I will have an explanatory diagram about modules,

10:31.000 --> 10:34.000
plugins, and the Adaptyst workflow in a few moments,

10:34.000 --> 10:38.000
and I hope this will clear out any confusion you may have at the moment.

10:38.000 --> 10:41.000
The second characteristic is architecture agnosticism.

10:41.000 --> 10:45.000
So it doesn't matter whether you have an Intel CPU, AMD CPU,

10:45.000 --> 10:48.000
ARM, RISC-V, an NVIDIA GPU, something else,

10:48.000 --> 10:50.000
like an FPGA or a custom accelerator,

10:50.000 --> 10:55.000
Adaptyst has you covered, as long as there is a module for that.

10:55.000 --> 11:00.000
And the third characteristic is related to one of the modules we already provide,

11:00.000 --> 11:04.000
and it's related to analysis of on-CPU and off-CPU activity

11:04.000 --> 11:07.000
of all of your code running on Linux.

11:07.000 --> 11:11.000
And this is applicable to all threads and processes spawned by your code,

11:11.000 --> 11:15.000
along with any low-level software hardware interactions

11:15.000 --> 11:18.000
that may happen when your code is run.

11:18.000 --> 11:20.000
For example, page faults, cache misses,

11:20.000 --> 11:23.000
retired instructions, et cetera.

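NOTE
As a quick side note for readers less familiar with these low-level events: independently of Adaptyst, the standard Linux perf tool can already count such software-hardware interactions for a given command. A minimal sketch, assuming a placeholder program called my_program:
  # list the events available on this machine
  perf list
  # count page faults, cache misses and retired instructions for one run
  perf stat -e page-faults,cache-misses,instructions -- ./my_program
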
11:23.000 --> 11:25.000
And last but not least,

11:25.000 --> 11:28.000
obviously, given that it's FOSDEM, we are open source,

11:28.000 --> 11:30.000
coming from CERN for the benefit of all,

11:30.000 --> 11:34.000
and our project is open to contributions from you guys.

11:34.000 --> 11:37.000
And I will talk about how you can help us out,

11:37.000 --> 11:39.000
closer to the end of the presentation.

11:39.000 --> 11:45.000
So, Adaptyst is meant to tackle the challenges I just mentioned,

11:45.000 --> 11:48.000
and help advance computing at CERN,

11:48.000 --> 11:53.000
and obviously beyond, because one of the missions of CERN

11:53.000 --> 11:57.000
is contributing back to the society.

11:57.000 --> 12:00.000
And given that we have the variety of platforms,

12:00.000 --> 12:02.000
along with various use cases,

12:02.000 --> 12:08.000
the eventual goal of Adaptyst is becoming not just a performance analysis tool,

12:08.000 --> 12:11.000
unifying performance analysis,

12:11.000 --> 12:14.000
applications and APIs,

12:14.000 --> 12:19.000
but also becoming a full-stack system design and compilation tool,

12:19.000 --> 12:20.000
along with software-hardware

12:20.000 --> 12:23.000
co-design, across the entire computing spectrum.

12:23.000 --> 12:28.000
And by this I mean from embedded computing up to exascale and HPC computing.

12:28.000 --> 12:31.000
So, to get started with Adaptyst,

12:31.000 --> 12:36.000
you have basically this flow here,

12:36.000 --> 12:40.000
and you start with defining your computing workflow to be analyzed.

12:40.000 --> 12:43.000
And this is where plugins come into play,

12:43.000 --> 12:46.000
but, just one caveat here,

12:46.000 --> 12:48.000
is that the current version of Adaptyst only supports

12:48.000 --> 12:51.000
running commands as part of your workflow.

12:51.000 --> 12:54.000
So, if you have a program which has already been compiled,

12:54.000 --> 12:56.000
and you want to run it and profile it,

12:56.000 --> 12:59.000
this is what running commands is about.

12:59.000 --> 13:02.000
But in the future, it will be possible to construct

13:02.000 --> 13:04.000
more sophisticated workflows,

13:04.000 --> 13:06.000
consisting of several plugins,

13:06.000 --> 13:08.000
such as plugins for running code,

13:08.000 --> 13:10.000
or compiling to some intermediate representation,

13:10.000 --> 13:12.000
like LLVM IR.

13:12.000 --> 13:15.000
And on the other side, you define a computer system,

13:15.000 --> 13:18.000
which your workflow should be analyzed against.

13:18.000 --> 13:21.000
And this system consists of nodes and entities.

13:21.000 --> 13:24.000
Nodes can be imagined as computer peripherals

13:24.000 --> 13:26.000
with modules attached to them,

13:26.000 --> 13:29.000
and these are responsible for modeling

13:29.000 --> 13:33.000
and/or profiling your hardware or system component,

13:33.000 --> 13:37.000
like GPU, CPU, FPGA, memory,

13:37.000 --> 13:39.000
persistent storage or something else.

13:39.000 --> 13:42.000
And the granularity of nodes can be changed

13:42.000 --> 13:44.000
without any issues here.

13:44.000 --> 13:47.000
It all depends on the availability of modules.

13:47.000 --> 13:50.000
So, if you want, for example, to split a CPU into several parts,

13:50.000 --> 13:52.000
like one for caches, one for instructions,

13:52.000 --> 13:53.000
and one for something else,

13:53.000 --> 13:56.000
you can do it if there are modules available for this.

13:56.000 --> 13:59.000
And similarly for other nodes.

13:59.000 --> 14:02.000
And entities can be imagined as computer servers,

14:02.000 --> 14:06.000
consisting of these nodes.

14:06.000 --> 14:08.000
And these entities can be, for example,

14:08.000 --> 14:10.000
connected by networking

14:10.000 --> 14:13.000
or some other connection here.

14:13.000 --> 14:15.000
And once you define these two things,

14:15.000 --> 14:19.000
the workflow gets converted to the adaptive intermediate representation,

14:19.000 --> 14:23.000
which at the moment takes the form of stateful

14:23.000 --> 14:25.000
dataflow multigraphs (SDFGs),

14:25.000 --> 14:28.000
researched by the Scalable Parallel Computing Lab

14:28.000 --> 14:29.000
at ETH Zürich.

14:29.000 --> 14:31.000
There was actually one talk

14:31.000 --> 14:36.000
related to this topic

14:36.000 --> 14:40.000
yesterday in the AI devroom,

14:40.000 --> 14:43.000
where SDFGs were also used.

14:43.000 --> 14:47.000
So, if you want to know more information about SDFGs,

14:47.000 --> 14:51.000
I also invite you to watch that talk once it becomes available.

14:51.000 --> 14:56.000
And when the IR is ready from the workflow,

14:56.000 --> 14:58.000
the modules come into play,

14:58.000 --> 15:02.000
and start doing performance analysis of this,

15:02.000 --> 15:03.000
of this IR.

15:03.000 --> 15:06.000
So that you can extract your performance insights

15:06.000 --> 15:08.000
about your workflow.

15:08.000 --> 15:12.000
And one important thing about,

15:12.000 --> 15:15.000
also about modules, is that they decide

15:15.000 --> 15:18.000
how their performance analysis results

15:18.000 --> 15:20.000
should be displayed to the user afterwards,

15:20.000 --> 15:23.000
because it's not enough to do just performance analysis.

15:23.000 --> 15:25.000
You want to browse these results

15:25.000 --> 15:28.000
and infer some conclusions from this.

15:28.000 --> 15:31.000
And for this, we have a dedicated program called

15:31.000 --> 15:35.000
Adaptyst Analyzer, which produces an interactive website

15:35.000 --> 15:40.000
that you can browse to view your analysis results.

15:40.000 --> 15:43.000
And I have a demo video that I will show you in a few seconds.

15:43.000 --> 15:47.000
And I mentioned that Adaptyst is meant to be

15:47.000 --> 15:50.000
an automatic computer system design tool.

15:50.000 --> 15:54.000
And this will be achieved in the future,

15:54.000 --> 15:57.000
in the course of my PhD research,

15:57.000 --> 16:00.000
by constructing the system graph automatically

16:00.000 --> 16:02.000
with hints from a user,

16:02.000 --> 16:05.000
and obviously with a workflow.

16:05.000 --> 16:07.000
So for example, if you provide just a workflow

16:07.000 --> 16:10.000
and a partial system graph here,

16:10.000 --> 16:13.000
which is in this case,

16:13.000 --> 16:16.000
a choice between two CPUs, CPU1 and CPU2,

16:16.000 --> 16:20.000
connected to a GPU and optionally to a custom accelerator,

16:20.000 --> 16:23.000
then Adaptyst will determine automatically.

16:23.000 --> 16:25.000
What CPU should be picked,

16:25.000 --> 16:28.000
either CPU1 or CPU2,

16:29.000 --> 16:33.000
and whether a custom accelerator should be connected

16:33.000 --> 16:34.000
to your system.

16:34.000 --> 16:39.000
And what parts of your workflow should be done

16:39.000 --> 16:41.000
on which compute unit,

16:41.000 --> 16:45.000
Like CPU, GPU, custom accelerator et cetera.

16:45.000 --> 16:47.000
And this particular system graph

16:47.000 --> 16:50.000
always contains only compute units.

16:50.000 --> 16:53.000
But for example, Adaptyst will be able to construct

16:53.000 --> 16:56.000
automatically, memory characteristics,

16:56.000 --> 17:01.000
storage solutions, networking solutions et cetera.

17:01.000 --> 17:04.000
Basically,

17:04.000 --> 17:07.000
everything depends on the work being done on modules.

17:07.000 --> 17:10.000
And this is our ambitious roadmap

17:10.000 --> 17:13.000
for the next several years.

17:13.000 --> 17:17.000
But we already provide some modules now

17:17.000 --> 17:19.000
for performance analysis.

17:19.000 --> 17:21.000
We have Linux Perf, which is based on

17:21.000 --> 17:25.000
an instance of Linux perf, with our own custom patches.

17:25.000 --> 17:30.000
But in order to have better maintainability of the module,

17:30.000 --> 17:32.000
we are moving soon to an in-house equivalent,

17:32.000 --> 17:34.000
based on the perf_event_open syscall

17:34.000 --> 17:36.000
or eBPF soon.

17:36.000 --> 17:38.000
And the functionality of Linux Perf

17:38.000 --> 17:41.000
is sampling on-CPU and off-CPU activity

17:41.000 --> 17:44.000
of all threads and processes spawned by your code,

17:44.000 --> 17:48.000
while minimizing the risk of getting broken profile stacks,

17:48.000 --> 17:50.000
as long as your programs are compiled

17:50.000 --> 17:53.000
with frame pointers.

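NOTE
Assuming the requirement reconstructed above really is frame pointers, this is how a C or C++ program would typically be built so that its call stacks can be unwound reliably; the file and binary names are placeholders:
  # keep frame pointers even at higher optimisation levels (GCC or Clang)
  gcc -O2 -g -fno-omit-frame-pointer -o my_program my_program.c
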
17:53.000 --> 17:56.000
Linux Perf also performs cache-aware roofline profiling,

17:56.000 --> 17:58.000
thanks to the integration with the CARM tool

17:58.000 --> 18:01.000
from INESC-ID, which is one of our academic collaborators

18:01.000 --> 18:03.000
in Lisbon, Portugal.

18:03.000 --> 18:07.000
And it supports custom sampling-based perf events

18:07.000 --> 18:09.000
for profiling more intricate interactions

18:09.000 --> 18:11.000
like page faults, et cetera.

18:11.000 --> 18:14.000
And we also have one more module, NVGPU,

18:14.000 --> 18:17.000
for tracing CUDA API calls at the moment,

18:17.000 --> 18:19.000
the driver and runtime APIs, to be specific.

18:19.000 --> 18:22.000
But more features are coming soon as time passes

18:22.000 --> 18:24.000
and users provide their feedback,

18:24.000 --> 18:26.000
because we have several use cases

18:26.000 --> 18:28.000
at CERN utilizing NVIDIA GPUs.

18:28.000 --> 18:31.000
And we have two more modules, which are either work in progress

18:31.000 --> 18:32.000
or planned.

18:32.000 --> 18:34.000
One is about analysis of any metrics obtained

18:34.000 --> 18:36.000
by external tools, like power consumption

18:36.000 --> 18:40.000
or other custom metrics.

18:40.000 --> 18:43.000
And one is about analysis of embedded software,

18:43.000 --> 18:46.000
running without any operating system on systems-on-chip,

18:47.000 --> 18:50.000
like SoCs with CPUs and FPGAs.

18:50.000 --> 18:53.000
So what are the things we already analyze at CERN,

18:53.000 --> 18:55.000
or want to analyze at CERN overall?

18:55.000 --> 18:58.000
For example, latency, throughput,

18:58.000 --> 19:00.000
usage of memory and other resources,

19:00.000 --> 19:02.000
parallelization opportunities,

19:02.000 --> 19:06.000
so that we can deploy GPUs or other accelerators.

19:06.000 --> 19:08.000
Utilization of low-level hardware features,

19:08.000 --> 19:12.000
like pipelining or vectorized instructions on CPUs,

19:12.000 --> 19:15.000
performance of code compilation and JIT, if used,

19:15.000 --> 19:17.000
and this is more about compilation,

19:17.000 --> 19:19.000
configuration and options,

19:19.000 --> 19:24.000
rather than optimizing compile implementations for now.

19:24.000 --> 19:27.000
But we are open to investigations

19:27.000 --> 19:29.000
regarding compilers to some extent,

19:29.000 --> 19:33.000
maybe in the far future, let's say.

19:33.000 --> 19:35.000
And last but not least,

19:35.000 --> 19:36.000
energy and power consumption,

19:36.000 --> 19:38.000
in the context again of sustainable computing

19:38.000 --> 19:40.000
and cooling constraints we have.

19:40.000 --> 19:44.000
And when it comes to the state of Adaptyst features

19:45.000 --> 19:48.000
related to these metrics we want to analyze,

19:48.000 --> 19:51.000
we can already do latency

19:51.000 --> 19:56.000
through to the usage of low-level hardware features.

19:56.000 --> 19:59.000
Energy and power consumption can be done

19:59.000 --> 20:01.000
already, I would say,

20:01.000 --> 20:06.000
because perf supports profiling energy consumption,

20:06.000 --> 20:08.000
so the Linux Perf module

20:08.000 --> 20:11.000
might be able to do some profiling

20:11.000 --> 20:13.000
related to energy consumption as well,

20:13.000 --> 20:16.000
but I didn't mark it as supported by Adaptyst

20:16.000 --> 20:18.000
because I have never tested it,

20:18.000 --> 20:22.000
and so I'd rather play it safe for now

20:22.000 --> 20:25.000
and say that it's not supported yet.

20:25.000 --> 20:29.000
But all the blue text means

20:29.000 --> 20:33.000
basically that the support of Adaptyst

20:33.000 --> 20:36.000
for these metrics is either already planned,

20:36.000 --> 20:39.000
or can be added in the future.

20:39.000 --> 20:42.000
Again, thanks to the modules.

20:42.000 --> 20:45.000
So, you want to get started with Adaptyst

20:45.000 --> 20:47.000
in practical terms?

20:47.000 --> 20:48.000
Easy, easy.

20:48.000 --> 20:50.000
It's open-source, of course.

20:50.000 --> 20:52.000
And you can get it from here from our website,

20:52.000 --> 20:54.000
adaptyst.web.cern.ch.

20:54.000 --> 20:57.000
But if you don't want to read

20:57.000 --> 20:58.000
the whole documentation,

20:58.000 --> 21:00.000
etc, you can just go to GitHub and see

21:00.000 --> 21:02.000
our repositories.

21:02.000 --> 21:04.000
And this is also fine.

21:04.000 --> 21:07.000
But you will probably want to visit our website anyway,

21:07.000 --> 21:08.000
because it's the website,

21:08.000 --> 21:10.000
which has the installation instructions

21:10.000 --> 21:12.000
And Adaptyst is already available as an early development version,

21:12.000 --> 21:14.000
so it's not production-ready yet,

21:14.000 --> 21:16.000
in the form of source code,

21:16.000 --> 21:18.000
but other variants are coming,

21:18.000 --> 21:19.000
like pre-built binaries.

21:19.000 --> 21:22.000
Because obviously we want Adaptyst to be as accessible

21:22.000 --> 21:25.000
as possible, to as many people as possible.

21:25.000 --> 21:27.000
And once you set up Adaptyst,

21:27.000 --> 21:29.000
getting started with it, as I said, is easy.

21:39.000 --> 21:44.000
Let's start with defining a one-entity system graph,

21:44.000 --> 21:46.000
which has a CPU node

21:46.000 --> 21:50.000
with the Linux-perf module attached to it,

21:50.000 --> 21:53.000
and with the node being placed inside

21:53.000 --> 21:55.000
an entity called entity one.

21:55.000 --> 21:58.000
So all you need to do is to define a system graph

21:58.000 --> 22:01.000
in a YAML file using this syntax here.

22:01.000 --> 22:03.000
I will not dive into too much detail about

22:03.000 --> 22:05.000
how to write the syntax.

22:05.000 --> 22:07.000
You can check the documentation for this.

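NOTE
A minimal sketch of the one-entity system graph just described, written from the talk's description rather than the official documentation, so the exact YAML keys and the module name are illustrative assumptions; check adaptyst.web.cern.ch for the real syntax:
  # hypothetical system.yml: entity "entity1" with one CPU node
  # profiled by the Linux-perf-based module
  cat > system.yml << 'EOF'
  entity1:
    cpu:
      module: linux_perf
  EOF
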
22:08.000 --> 22:10.000
And once you save this file,

22:10.000 --> 22:12.000
you run Adaptyst,

22:12.000 --> 22:14.000
by running adaptyst dash s

22:14.000 --> 22:15.000
followed by your system graph file,

22:15.000 --> 22:17.000
which points out to Adaptyst:

22:17.000 --> 22:20.000
okay, my system graph file is in this location.

22:20.000 --> 22:22.000
Then dash d, which means:

22:22.000 --> 22:24.000
okay, I want to provide a command to run,

22:24.000 --> 22:27.000
because this is the only option which is supported at the moment.

22:27.000 --> 22:28.000
Followed by double dashes,

22:28.000 --> 22:30.000
and the command to be profiled, without any quotes.

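NOTE
Putting the options described above together, a profiling run could look roughly like this; the file name system.yml and the profiled command are placeholders, and the exact option spelling should be checked against the documentation:
  # -s points to the system graph file, -d says "run a command",
  # and everything after -- is the command to be profiled (no quotes needed)
  adaptyst -s system.yml -d -- ./my_program --input data.bin
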
22:30.000 --> 22:32.000
And once you run it,

22:32.000 --> 22:35.000
you wait until it produces something along these lines.

22:35.000 --> 22:38.000
With the performance analysis result,

22:38.000 --> 22:39.000
indicated here.

22:39.000 --> 22:42.000
And afterwards, you move this result directory

22:42.000 --> 22:44.000
to some empty directory of your choice.

22:44.000 --> 22:46.000
For example, result underscore dir,

22:46.000 --> 22:48.000
and then you can run Adaptyst Analyzer

22:48.000 --> 22:51.000
while pointing it to result underscore dir.

22:51.000 --> 22:54.000
And then you just open the website in your web browser,

22:54.000 --> 22:56.000
like you do with Jupyter notebooks,

22:56.000 --> 22:57.000
and you are done.

22:57.000 --> 22:58.000
Have fun.

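NOTE
And a rough sketch of the viewing step that was just described; the adaptyst-analyser command name and the result_dir directory are assumptions for illustration, following the flow above:
  mkdir result_dir
  mv results result_dir/        # move the produced results directory (name may differ)
  adaptyst-analyser result_dir  # serves an interactive website with the analysis results
  # then open the printed address in a web browser, like with a Jupyter notebook
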
22:58.000 --> 23:00.000
So I have the demonstration video,

23:00.000 --> 23:04.000
which I will play to you right now,

23:04.000 --> 23:06.000
because I decided not to do live demos.

23:06.000 --> 23:09.000
So no live demos today, so okay.

23:09.000 --> 23:12.000
So this demonstration presents Adaptyst Analyzer

23:12.000 --> 23:14.000
with two performance analysis sessions,

23:14.000 --> 23:17.000
done on some snippets of the ROOT framework,

23:17.000 --> 23:21.000
which is a widely used analysis framework

23:21.000 --> 23:23.000
for experimental data at CERN.

23:23.000 --> 23:25.000
Please don't confuse this with root accounts.

23:25.000 --> 23:27.000
This is a completely different thing.

23:27.000 --> 23:29.000
If you want to check out ROOT,

23:29.000 --> 23:32.000
you can scan the QR code here to visit

23:32.000 --> 23:34.000
to visit the ROOT website,

23:34.000 --> 23:36.000
or just visit root.cern.

23:36.000 --> 23:38.000
I think that's the address,

23:38.000 --> 23:40.000
I just don't remember.

23:40.000 --> 23:42.000
So yeah, so let's do the demo.

23:42.000 --> 23:46.000
This is the initial screen of Adaptyst Analyzer.

23:46.000 --> 23:49.000
where you can select a performance analysis session

23:49.000 --> 23:51.000
using the top combo boxes.

23:51.000 --> 23:53.000
Once you make your selection,

23:53.000 --> 23:55.000
you start with the system graph view

23:55.000 --> 23:57.000
with entities and different nodes.

23:57.000 --> 23:59.000
Double click any node and select the module

23:59.000 --> 24:01.000
to open its results.

24:01.000 --> 24:03.000
In this case, we have Linux Perf.

24:03.000 --> 24:06.000
With the timeline view of threads and processes,

24:06.000 --> 24:08.000
with red parts corresponding to on-CPU

24:08.000 --> 24:11.000
and blue parts corresponding to off-CPU activity.

24:11.000 --> 24:14.000
Click a thread to see its runtime,

24:14.000 --> 24:18.000
spawning stack trace and available analysis types.

24:21.000 --> 24:24.000
To open an analysis, just click it.

24:24.000 --> 24:26.000
Here, we have flame graphs,

24:26.000 --> 24:28.000
which allow you to quickly check the performance

24:28.000 --> 24:30.000
of specific parts of code.

24:30.000 --> 24:32.000
In this case, you can see information about

24:32.000 --> 24:34.000
wall time of the main thread.

24:34.000 --> 24:36.000
Flame graphs are interactive.

24:36.000 --> 24:38.000
You can zoom in, zoom out,

24:38.000 --> 24:40.000
search with regular expressions,

24:40.000 --> 24:44.000
switch between non-time-ordered and time-ordered views,

24:44.000 --> 24:45.000
et cetera.

24:45.000 --> 24:46.000
In case of wall time,

24:46.000 --> 24:48.000
flame graphs are heat-coded.

24:48.000 --> 24:50.000
The cold elements closer to blue

24:50.000 --> 24:52.000
are more off CPU,

24:52.000 --> 24:54.000
while the hot elements closer to red

24:54.000 --> 24:56.000
are more on CPU.

24:56.000 --> 24:57.000
If available,

24:57.000 --> 24:59.000
flame graphs for other metrics

24:59.000 --> 25:01.000
can be opened in the top left combo box

25:01.000 --> 25:03.000
next to "time-ordered".

25:03.000 --> 25:05.000
You can also view source code of a block

25:05.000 --> 25:06.000
by right clicking it,

25:06.000 --> 25:08.000
and picking the option to view the code details.

25:08.000 --> 25:11.000
This opens a preview window with your code,

25:11.000 --> 25:13.000
with most critical lines highlighted

25:13.000 --> 25:14.000
in stronger shades of red,

25:14.000 --> 25:16.000
or blue, in case of wall time.

25:16.000 --> 25:19.000
You can hover over the highlighted line numbers

25:19.000 --> 25:21.000
to see more information about

25:21.000 --> 25:23.000
the contribution of given code lines.

25:23.000 --> 25:24.000
If needed,

25:24.000 --> 25:26.000
you can rename windows

25:26.000 --> 25:27.000
to whatever you like,

25:27.000 --> 25:29.000
or temporarily hide them.

25:29.000 --> 25:31.000
Just click the relevant button

25:31.000 --> 25:33.000
in the title bar.

25:33.000 --> 25:36.000
You can open multiple windows at the same time

25:36.000 --> 25:38.000
across multiple sessions.

25:38.000 --> 25:40.000
This allows you to make side-by-side comparisons.

25:40.000 --> 25:41.000
For example,

25:41.000 --> 25:43.000
we have the ROOT code snippet

25:43.000 --> 25:46.000
where moving computation from CPU to GPU

25:46.000 --> 25:48.000
makes the code more I/O-bound

25:48.000 --> 25:52.000
by shrinking the highlighted compute flame graph elements.

25:55.000 --> 25:57.000
There is an option

25:57.000 --> 25:59.000
of replacing text occurrences in flame graphs.

25:59.000 --> 26:00.000
For example,

26:00.000 --> 26:01.000
if you want to shorten

26:01.000 --> 26:02.000
very long function names,

26:02.000 --> 26:05.000
you can use regular expressions here.

26:06.000 --> 26:08.000
Text replacements

26:08.000 --> 26:09.000
can be updated easily

26:09.000 --> 26:11.000
by right-clicking the find-and-replace

26:11.000 --> 26:13.000
button and picking your replacement.

26:20.000 --> 26:22.000
Some flame graph elements

26:22.000 --> 26:23.000
are compressed,

26:23.000 --> 26:25.000
and shown as light-pink blocks

26:25.000 --> 26:27.000
to save rendering resources.

26:27.000 --> 26:29.000
These can be expanded easily

26:29.000 --> 26:31.000
just by clicking them.

26:35.000 --> 26:37.000
The Linux Perf module also features

26:37.000 --> 26:39.000
integration with the CARM tool.

26:39.000 --> 26:41.000
You can view cache-aware roofline

26:41.000 --> 26:43.000
plots along with points

26:43.000 --> 26:45.000
corresponding to specific flame graph blocks,

26:45.000 --> 26:47.000
as shown on the slide.

26:49.000 --> 26:50.000
Here's an example

26:50.000 --> 26:52.000
of a slightly more complex system

26:52.000 --> 26:54.000
graph, with information shown

26:54.000 --> 26:56.000
by the NVGPU module on the right.

26:56.000 --> 26:57.000
Currently,

26:57.000 --> 26:59.000
NVGPU supports tracing

26:59.000 --> 27:00.000
CUDA API calls

27:00.000 --> 27:02.000
for specific code regions

27:02.000 --> 27:04.000
that are then displayed on the timeline.

27:04.000 --> 27:05.000
To see detailed

27:05.000 --> 27:06.000
execution times,

27:06.000 --> 27:08.000
right-click a given region.

27:11.000 --> 27:12.000
OK,

27:12.000 --> 27:14.000
so that was the demonstration video

27:14.000 --> 27:16.000
of Adaptyst Analyzer.

27:16.000 --> 27:18.000
So now you may wonder

27:19.000 --> 27:21.000
why another performance analysis tool?

27:21.000 --> 27:23.000
Why another profiler

27:23.000 --> 27:26.000
or another tool of a similar nature?

27:26.000 --> 27:28.000
So I have compiled

27:28.000 --> 27:31.000
this nice and compact comparison table

27:31.000 --> 27:33.000
between Adaptyst and other similar

27:33.000 --> 27:34.000
and well-known profilers.

27:34.000 --> 27:36.000
And I can also tell you

27:36.000 --> 27:38.000
that Adaptyst is not meant

27:38.000 --> 27:40.000
to be just another profiler.

27:40.000 --> 27:42.000
I mean, it's meant to be a tool

27:42.000 --> 27:45.000
which unifies

27:45.000 --> 27:46.000
which tries to unify

27:46.000 --> 27:49.000
the already existing ecosystem of performance analysis.

27:49.000 --> 27:51.000
And it will also have automated computer

27:51.000 --> 27:52.000
system design, as I mentioned before.

27:52.000 --> 27:55.000
I won't talk about the table itself.

27:55.000 --> 27:58.000
I will only say that Adaptyst is

27:58.000 --> 27:59.000
hardware-vendor-portable,

27:59.000 --> 28:01.000
as shown on this slide.

28:01.000 --> 28:03.000
It analyzes software-hardware interactions

28:03.000 --> 28:05.000
to the extent supported by the user's

28:05.000 --> 28:06.000
computer architecture.

28:06.000 --> 28:08.000
It's open source, of course.

28:08.000 --> 28:10.000
It supports off-CPU profiling.

28:10.000 --> 28:11.000
It has flexible support

28:11.000 --> 28:13.000
of exotic and custom architectures.

28:13.000 --> 28:15.000
Thanks to the modular design.

28:15.000 --> 28:17.000
And it will have a flexible support

28:17.000 --> 28:19.000
of multi-node systems soon,

28:19.000 --> 28:21.000
once the entity support of Adaptyst

28:21.000 --> 28:23.000
is expanded.

28:23.000 --> 28:26.000
So now let's talk about contributing.

28:26.000 --> 28:28.000
As you may guess,

28:28.000 --> 28:30.000
performance encompasses all of computing.

28:30.000 --> 28:32.000
So we are sure that Adapis can help you

28:32.000 --> 28:33.000
in your work.

28:33.000 --> 28:36.000
So please try our tool out

28:36.000 --> 28:37.000
and give us feedback.

28:37.000 --> 28:39.000
And this will already make us happy.

28:39.000 --> 28:41.000
But obviously we will also be happy

28:41.000 --> 28:44.000
if we see code contributions from you guys.

28:44.000 --> 28:46.000
And this is because your help will actually

28:46.000 --> 28:49.000
push forward research and development

28:49.000 --> 28:51.000
on automated software hardware

28:51.000 --> 28:52.000
co-design.

28:52.000 --> 28:53.000
And in case of modules,

28:53.000 --> 28:56.000
increase the visibility of software

28:56.000 --> 28:58.000
system and hardware products.

28:58.000 --> 29:00.000
Among users, such as CPUs, GPUs,

29:00.000 --> 29:02.000
custom accelerators, etc.

29:02.000 --> 29:04.000
And by contributing to Adaptyst,

29:04.000 --> 29:07.000
you will also help address performance problems

29:07.000 --> 29:08.000
we have at CERN.

29:08.000 --> 29:10.000
And therefore help science.

29:10.000 --> 29:12.000
And if you don't know what

29:12.000 --> 29:14.000
or how to contribute beyond trying the tool out,

29:14.000 --> 29:16.000
we have some ideas here.

29:17.000 --> 29:19.000
But these are obviously not,

29:19.000 --> 29:21.000
not binding at any point.

29:21.000 --> 29:22.000
If you have some other ideas,

29:22.000 --> 29:24.000
just feel free to talk to us.

29:24.000 --> 29:26.000
Or to send us a message.

29:26.000 --> 29:28.000
adaptyst-contact at cern.ch.

29:28.000 --> 29:31.000
We read every email sent there,

29:31.000 --> 29:34.000
unless it's obvious spam, of course.

29:34.000 --> 29:37.000
And I can also tell you when it comes to

29:37.000 --> 29:39.000
Adaptyst addressing performance problems at

29:39.000 --> 29:40.000
CERN.

29:40.000 --> 29:42.000
We already have one success story related to it,

29:42.000 --> 29:44.000
even though it's an early-phase project.

29:44.000 --> 29:46.000
And this is related to porting

29:46.000 --> 29:48.000
some parts of the ROOT framework from

29:48.000 --> 29:51.000
CPU to GPU,

29:51.000 --> 29:53.000
using the SYCL programming model.

29:53.000 --> 29:56.000
Basically, Adaptyst proved that

29:56.000 --> 29:59.000
the computation part of one of our benchmarks

29:59.000 --> 30:02.000
was slashed by 92%.

30:02.000 --> 30:06.000
When moved from CPU to GPU.

30:06.000 --> 30:10.000
Meanwhile, the overhead of running Adaptyst

30:10.000 --> 30:13.000
was less than 3% and the workload itself,

30:13.000 --> 30:15.000
which was analyzed,

30:15.000 --> 30:16.000
ran for several minutes.

30:16.000 --> 30:18.000
Like, on the GPU it ran for three minutes

30:18.000 --> 30:21.000
and on the CPU, it ran for more than 10 minutes

30:21.000 --> 30:23.000
as far as I remember.

30:23.000 --> 30:26.000
So yeah, again, feel free to contribute.

30:26.000 --> 30:32.000
And talk to us if you are interested in

30:32.000 --> 30:34.000
helping us out with Adapis.

30:34.000 --> 30:37.000
So now I don't have much time left,

30:37.000 --> 30:40.000
so I will basically quickly go through these slides

30:40.000 --> 30:42.000
about what actually happens computing

30:42.000 --> 30:44.000
wise when particles collide in the LHC.

30:44.000 --> 30:46.000
So I will use the example of the ATLAS

30:46.000 --> 30:47.000
experiment here.

30:47.000 --> 30:49.000
So here you can see a slice of a particle

30:49.000 --> 30:50.000
detector.

30:50.000 --> 30:53.000
Particles collide here and they create

30:53.000 --> 30:56.000
another set of different particles,

30:56.000 --> 30:58.000
which go through various trackers and

30:58.000 --> 31:02.000
detectors meant to detect and map

31:02.000 --> 31:04.000
various types of particles.

31:04.000 --> 31:09.000
And as these events are detected by

31:09.000 --> 31:11.000
the detectors, they are digitized

31:11.000 --> 31:14.000
and sent further down the pipeline for processing.

31:14.000 --> 31:16.000
But we have one collision happening

31:16.000 --> 31:17.000
every 25 nanoseconds.

31:17.000 --> 31:19.000
So we have an enormous amount of data,

31:19.000 --> 31:22.000
which cannot all be saved and processed.

31:22.000 --> 31:24.000
So this is why we have filtering systems.

31:24.000 --> 31:27.000
So you can see that at the very beginning,

31:27.000 --> 31:29.000
we are talking about data rates

31:29.000 --> 31:32.000
of the magnitude of 60 terabytes per second.

31:32.000 --> 31:36.000
So we need to have filtering systems.

31:36.000 --> 31:39.000
And in this case we have the level 1 trigger,

31:39.000 --> 31:41.000
which needs to make a decision

31:41.000 --> 31:43.000
in a very short time frame,

31:43.000 --> 31:46.000
whether an event should be saved for further processing.

31:46.000 --> 31:49.000
And this is usually implemented using FPGAs.

31:49.000 --> 31:51.000
And then we have high level trigger,

31:51.000 --> 31:54.000
which decides in a slightly longer time frame,

31:54.000 --> 31:56.000
whether an event should be saved for further processing.

31:56.000 --> 31:59.000
And this uses more, let's say,

31:59.000 --> 32:01.000
we can use more information about

32:01.000 --> 32:04.000
a given event, thanks to the slightly longer time frame.

32:04.000 --> 32:10.000
And this HLT usually uses a PC farm of CPUs and GPUs.

32:10.000 --> 32:13.000
And then afterwards,

32:13.000 --> 32:17.000
everything which passed these two stages of filtering

32:17.000 --> 32:22.000
gets sent to our data centers for final analysis,

32:22.000 --> 32:25.000
so that we can produce papers to be published

32:25.000 --> 32:29.000
in scientific journals and conferences.

32:29.000 --> 32:33.000
But this is only a very small part of the computing pipeline.

32:33.000 --> 32:36.000
This is what happens in real time,

32:36.000 --> 32:38.000
when data is collected from the LHC,

32:38.000 --> 32:41.000
and then there are computing steps

32:41.000 --> 32:45.000
which happen outside of the real-time data collection at the LHC.

32:45.000 --> 32:48.000
We call the former online computing,

32:48.000 --> 32:51.000
and the latter offline computing.

32:51.000 --> 32:53.000
And for offline computing,

32:53.000 --> 32:58.000
we operate on more than 500 petabytes of data per year,

32:58.000 --> 33:01.000
considering only the 2024, 2025 period,

33:01.000 --> 33:03.000
not terabytes, not gigabytes,

33:03.000 --> 33:05.000
not megabytes: petabytes.

33:05.000 --> 33:07.000
So this means that we need at least one

33:07.000 --> 33:09.000
million CPU cores to analyze everything,

33:09.000 --> 33:11.000
not to mention IO and storage.

33:11.000 --> 33:14.000
And this is why we have the worldwide LHC computing grid,

33:14.000 --> 33:16.000
which is a grid of distributed computers,

33:16.000 --> 33:18.000
with dedicated networking,

33:18.000 --> 33:20.000
separated into several tiers.

33:20.000 --> 33:22.000
We have data centers in tier 0,

33:22.000 --> 33:25.000
14 sites, mostly national labs, in tier 1,

33:25.000 --> 33:29.000
more than 130 sites in tier 2, mostly universities,

33:29.000 --> 33:32.000
and tier 3, where computers have smaller responsibility

33:32.000 --> 33:34.000
than in the first three tiers.

33:34.000 --> 33:36.000
If you want to learn more about WLCG,

33:36.000 --> 33:39.000
visit wlcg-public.web.cern.ch,

33:39.000 --> 33:42.000
or scan the QR code here.

33:42.000 --> 33:45.000
So one collision every 25 nanoseconds

33:45.000 --> 33:47.000
is actually not enough for us at CERN,

33:48.000 --> 33:49.000
and for science itself.

33:49.000 --> 33:52.000
So we are upgrading the LHC to the High-Luminosity LHC,

33:52.000 --> 33:54.000
where the number of particle collisions

33:54.000 --> 33:57.000
will increase by a factor of 5 to 7.5.

33:57.000 --> 34:00.000
And this will raise significantly the computing demands.

34:00.000 --> 34:03.000
So you can see predictions here,

34:03.000 --> 34:05.000
made by the Atlas experiment.

34:05.000 --> 34:08.000
Unfortunately, I'm slowly running out of time,

34:08.000 --> 34:12.000
so I'm not able to explain this graph in more detail,

34:12.000 --> 34:16.000
but you can check the slides at your own convenience,

34:16.000 --> 34:19.000
because I've already uploaded them to the FOSDEM website.

34:19.000 --> 34:23.000
And data processing from the Large Hadron Collider

34:23.000 --> 34:26.000
is not the only area where we have performance challenges.

34:26.000 --> 34:28.000
We also have accelerator control systems,

34:28.000 --> 34:31.000
operating and monitoring the accelerators in real time,

34:31.000 --> 34:33.000
as I mentioned at the beginning.

34:33.000 --> 34:37.000
But we also have simulations of beams inside the accelerators,

34:37.000 --> 34:39.000
with the Xsuite framework.

34:39.000 --> 34:43.000
And this is also related to building

34:43.000 --> 34:47.000
new accelerators and upgrading our accelerator complex,

34:47.000 --> 34:53.000
so that we can, for example, again increase our number of particle collisions.

34:53.000 --> 34:58.000
So to finish things off: if we get insufficient performance,

34:58.000 --> 35:01.000
tasks and upgrades can't be finished on time or at all.

35:01.000 --> 35:06.000
And this means that our physics program is delayed,

35:06.000 --> 35:08.000
and we get fewer physics results,

35:08.000 --> 35:12.000
which is absolutely not what we want to achieve.

35:12.000 --> 35:15.000
We want to achieve good performance,

35:15.000 --> 35:20.000
and as many physics results as possible going forward.

35:20.000 --> 35:23.000
And with this being said, thank you very much for your attention,

35:23.000 --> 35:27.000
and you can visit our website at www.cern.ch,

35:31.000 --> 35:35.000
And once again, your contributions are more than welcome,

35:35.000 --> 35:38.000
just get in touch with us, and we will be happy to talk.

35:38.000 --> 35:41.000
Thank you so much.

