This session will review the current challenges in assessing the costs of operating a distributed data infrastructure and the difficulties of projecting these into the future. It will also explore some of the possible business models for future European data services, and how they might be realised in practice.

Thursday 25th September 2014

Costs & Business Models SESSION
by Marcin Ostasz, Barcelona Supercomputing Centre
Marcin Ostasz graduated with an MSc degree from the Faculty of Electronics at the Technical University of Budapest, and he also holds a Master of Business Administration (MBA) degree awarded by Oxford Brookes University in the UK. Marcin has over 13 years of combined experience gained in various technical, project management, operations management, business analysis and process improvement positions with organisations such as Nokia, American Power Conversion, Dell, GE and Barclays Bank. He is currently working at the Barcelona Supercomputing Centre as a business analyst. His tasks include supporting projects and organisations such as PRACE, the European Technology Platform, EUDAT and Mont-Blanc. He specialises in managing industrial relations, road-mapping, workshop management and business analysis.
by Alex Thirifays, The Danish National Archives
Alex Thirifays is a preservation specialist at The Danish National Archives. He obtained his MA in history in 1999 and worked as an IT consultant for several years before moving into digital curation, where he focuses on cost modelling and strategic issues. Together with the Royal Library in Denmark, he has developed the Cost Model for Digital Preservation (CMDP) and published several papers and reports on this work (www.costmodelfordigitalpreservation.dk). He is currently working on two European projects, one on the costs of curation (the 4C project) and one on the standardisation of information.
 
by Jamie Shiers, CERN
Jamie Shiers is currently manager of the Data Preservation for Long-Term Analysis in High Energy Physics (DPHEP) project, which involves all of the major HEP laboratories and experiments worldwide and collaborates actively with other disciplines. DPHEP is focusing on solutions to long-term exa-scale data preservation: data-related standards are clearly of key importance.
He has been involved in large-scale data management, object and relational database systems and services, distributed application design, implementation and support, and operations and services for many years. Recently, he led the effort to harden the Worldwide LHC Computing Grid (WLCG) services that helped scientists turn data into discoveries in record time, using petascale distributed resources.
He has experience in a variety of standardization activities, including those directly related to data (the Object Data Management Group and the IEEE Storage Systems working group) as well as computer languages (ISO/IEC JTC1/SC22/WG5 – Fortran).
Discussion with panel members 
David S. H. Rosenthal, Stanford University Libraries
David S. H. Rosenthal has been an engineer in Silicon Valley for nearly a third of a century, including as a Distinguished Engineer at Sun Microsystems and employee #4 at NVIDIA. For the second half of that time he has been the Chief Scientist of the LOCKSS (Lots Of Copies Keep Stuff Safe) Program at the Stanford University Libraries, working to preserve the web-published academic literature.

Rob Baxter, EPCC
Dr Rob Baxter graduated in 1989 with a BSc (1st Class Hons) in Physics/Theoretical Physics from the University of St Andrews. He then spent a year in Cambridge, doing Part III of the Maths Tripos and falling off punts, before coming to the University of Edinburgh in 1990 to join the Particle Physics Theory Group. He completed his PhD in lattice QCD in 1993 and subsequently joined EPCC. He currently co-manages the Software Development Group at EPCC, which is involved in commercial and scientific software development and works on projects such as SSI: the UK's Software Sustainability Institute; ADMIRE: advanced data mining and Internet-scale data integration; Maxwell: how to build a supercomputer out of FPGAs; and condition-based monitoring with ITI Techmedia.

Ari Lukkarinen, CSC
Ari Lukkarinen has been working at CSC for 13 years. In his early years there he worked mostly on high-performance computing, but he later focused more on storage systems and storage services. From 2006 to 2009 he was in charge of the group maintaining CSC's storage services.
Thereafter, he acted as CSC's chief enterprise architect. In recent years he has been working on international storage-related projects (EUDAT and Knowledge Exchange) and on CSC's IT service management practices. Ari Lukkarinen holds a Master's degree in technology from Tampere University of Technology and a PhD from Helsinki University of Technology.


Attachments
AlexThirifays.pdf (1.4 MB)
JamieShiers.pdf (998.87 KB)
MarcinOstasz.pdf (758.15 KB)