Lecturers and Doctoral Candidates
Publications
- Performance Evaluation of Open-Source Serverless Platforms for Kubernetes
(Jonathan Decker, Piotr Kasprzak, Julian Kunkel),
In Algorithms,
MDPI,
ISSN: 1999-4893,
2022-06-02
URL
DOI
PDF
BIBTEX
@article{PEOOSPFKDK22 abstract = {"Serverless computing has grown massively in popularity over the last few years, and has provided developers with a way to deploy function-sized code units without having to take care of the actual servers or deal with logging, monitoring, and scaling of their code. High-performance computing (HPC) clusters can profit from improved serverless resource sharing capabilities compared to reservation-based systems such as Slurm. However, before running self-hosted serverless platforms in HPC becomes a viable option, serverless platforms must be able to deliver a decent level of performance. Other researchers have already pointed out that there is a distinct lack of studies in the area of comparative benchmarks on serverless platforms, especially for open-source self-hosted platforms. This study takes a step towards filling this gap by systematically benchmarking two promising self-hosted Kubernetes-based serverless platforms in comparison. While the resulting benchmarks signal potential, they demonstrate that many opportunities for performance improvements in serverless computing are being left on the table."} author = {Jonathan Decker and Piotr Kasprzak and Julian Kunkel} doi = {https://doi.org/10.3390/a15070234} issn = {1999-4893} journal = {Algorithms} publisher = {MDPI} title = {Performance Evaluation of Open-Source Serverless Platforms for Kubernetes} url = {https://www.mdpi.com/1999-4893/15/7/234} year = {2022} month = {06} }
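For orientation, the measurement at the heart of such a benchmark can be as simple as timing repeated HTTP invocations of a deployed function. The sketch below is a minimal illustration, not the benchmark suite used in the paper; the endpoint URL, payload, and request count are placeholders.

```python
import statistics
import time

import requests

# Hypothetical endpoint of a function deployed on a Kubernetes-based
# serverless platform (e.g. exposed via an ingress); replace with a real URL.
FUNCTION_URL = "http://cluster.example.org/function/echo"

def measure_invocations(n=100):
    """Invoke the function n times and collect end-to-end latencies in ms."""
    latencies = []
    for _ in range(n):
        start = time.perf_counter()
        response = requests.post(FUNCTION_URL, json={"payload": "ping"}, timeout=30)
        elapsed_ms = (time.perf_counter() - start) * 1000.0
        if response.status_code == 200:
            latencies.append(elapsed_ms)
    return latencies

if __name__ == "__main__":
    samples = measure_invocations()
    print(f"invocations: {len(samples)}")
    print(f"median latency: {statistics.median(samples):.1f} ms")
    print(f"mean latency:   {statistics.mean(samples):.1f} ms")
```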
- Road Intersection Coordination Scheme for Mixed Traffic (Human-Driven and Driverless Vehicles): A Systematic Review
(Ekene F. Ozioko, Julian Kunkel, Frederic Stahl),
In Journal of Advanced Transportation,
Hindawi,
2022-05-30
DOI
PDF
BIBTEX
@article{RICSFMTADV22 abstract = {"Autonomous vehicles (AVs) are emerging with enormous potentials to solve many challenging road traffic problems. The AV emergence leads to a paradigm shift in the road traffic system, making the penetration of autonomous vehicles fast and its coexistence with human-driven cars inevitable. The migration from the traditional driving to the intelligent driving system with AV’s gradual deployment needs supporting technology to address mixed traffic systems problems, mixed driving behaviour in a car-following model, variation in-vehicle type control means, the impact of a proportion of AV in traffic mixed traffic, and many more. The migration to fully AV will solve many traffic problems: desire to reclaim travel and commuting time, driving comfort, and accident reduction. Motivated by the above facts, this paper presents an extensive review of road intersection mixed traffic management techniques with a classification matrix of different traffic management strategies and technologies that could effectively describe a mix of human and autonomous vehicles. It explores the existing traffic control strategies and analyses their compatibility in a mixed traffic environment. Then review their drawback and build on it for the proposed robust mix of traffic management schemes. Though many traffic control strategies have been in existence, the analysis presented in this paper gives new insights to the readers on the applications of the cell reservation strategy in a mixed traffic environment. Though many traffic control strategies have been in existence, the Gipp’s car-following model has shown to be very effective for optimal traffic flow performance."} author = {Ekene F. Ozioko and Julian Kunkel and Frederic Stahl} doi = {https://doi.org/10.1155/2022/2951999} journal = {Journal of Advanced Transportation} publisher = {Hindawi} title = {Road Intersection Coordination Scheme for Mixed Traffic (Human-Driven and Driverless Vehicles): A Systematic Review} year = {2022} month = {05} }
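The abstract's closing remark refers to Gipps' car-following model. For reference, a commonly cited form of the model (Gipps, 1981) is reproduced below; the notation (vehicle n follows vehicle n−1, a_n desired acceleration, b_n most severe braking, V_n desired speed, s_{n−1} effective length of the leader, τ reaction time, b̂ estimated braking of the leader) follows the usual textbook convention and is not taken from the paper itself.

```latex
% Commonly cited form of Gipps' (1981) car-following model; notation is the
% usual textbook convention, not the paper's own.
\begin{align}
v_n^{\mathrm{acc}}(t+\tau) &= v_n(t) + 2.5\,a_n \tau \left(1 - \frac{v_n(t)}{V_n}\right)
                              \sqrt{0.025 + \frac{v_n(t)}{V_n}} \\
v_n^{\mathrm{brk}}(t+\tau) &= b_n \tau + \sqrt{b_n^2 \tau^2
      - b_n \left[ 2\big(x_{n-1}(t) - s_{n-1} - x_n(t)\big) - v_n(t)\,\tau
      - \frac{v_{n-1}(t)^2}{\hat{b}} \right]} \\
v_n(t+\tau) &= \min\left\{ v_n^{\mathrm{acc}}(t+\tau),\; v_n^{\mathrm{brk}}(t+\tau) \right\}
\end{align}
```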
- Improve the Deep Learning Models in Forestry Based on Explanations and Expertise
(Ximeng Cheng, Ali Doosthosseini, Julian Kunkel),
In Frontiers in Plant Science,
Schloss Dagstuhl -- Leibniz-Zentrum für Informatik,
ISSN: 1664-462X,
2022-05-01
DOI
PDF
BIBTEX
@article{ITDLMIFBOE22 abstract = {"In forestry studies, deep learning models have achieved excellent performance in many application scenarios (e.g., detecting forest damage). However, the unclear model decisions (i.e., black-box) undermine the credibility of the results and hinder their practicality. This study intends to obtain explanations of such models through the use of explainable artificial intelligence methods, and then use feature unlearning methods to improve their performance, which is the first such attempt in the field of forestry. Results of three experiments show that the model training can be guided by expertise to gain specific knowledge, which is reflected by explanations. For all three experiments based on synthetic and real leaf images, the improvement of models is quantified in the classification accuracy (up to 4.6%) and three indicators of explanation assessment (i.e., root-mean-square error, cosine similarity, and the proportion of important pixels). Besides, the introduced expertise in annotation matrix form was automatically created in all experiments. This study emphasizes that studies of deep learning in forestry should not only pursue model performance (e.g., higher classification accuracy) but also focus on the explanations and try to improve models according to the expertise."} author = {Ximeng Cheng and Ali Doosthosseini and Julian Kunkel} doi = {https://doi.org/10.3389/fpls.2022.902105} issn = {1664-462X} journal = {Frontiers in Plant Science} publisher = {Schloss Dagstuhl -- Leibniz-Zentrum für Informatik} title = {Improve the Deep Learning Models in Forestry Based on Explanations and Expertise} year = {2022} month = {05} }
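The three explanation-assessment indicators named in the abstract (root-mean-square error, cosine similarity, proportion of important pixels) can be illustrated with a few lines of NumPy. The sketch below is one plausible reading of those indicators, comparing a model saliency map against an expert annotation matrix; the function name, the top-fraction parameter, and the random test data are invented for the example, and the paper's exact definitions may differ.

```python
import numpy as np

def explanation_scores(saliency: np.ndarray, annotation: np.ndarray, top_frac: float = 0.1):
    """Compare a model explanation with an expert annotation matrix."""
    s = saliency.ravel().astype(float)
    a = annotation.ravel().astype(float)

    rmse = float(np.sqrt(np.mean((s - a) ** 2)))
    cosine = float(np.dot(s, a) / (np.linalg.norm(s) * np.linalg.norm(a) + 1e-12))

    # Share of the top-k most salient pixels that fall inside the
    # expert-annotated (non-zero) region.
    k = max(1, int(top_frac * s.size))
    top_idx = np.argsort(s)[-k:]
    important_share = float(np.mean(a[top_idx] > 0))

    return {"rmse": rmse, "cosine": cosine, "important_pixel_share": important_share}

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    saliency = rng.random((64, 64))                       # synthetic saliency map
    annotation = (rng.random((64, 64)) > 0.8).astype(float)  # synthetic expert mask
    print(explanation_scores(saliency, annotation))
```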
- Predicting Stock Price Changes Based on the Limit Order Book: A Survey
(Ilia Zaznov, Julian Kunkel, Alfonso Dufour, Atta Badii),
In Mathematics,
Series: 1234,
MDPI,
ISSN: 2227-7390,
2022-04-01
URL
DOI
PDF
BIBTEX
@article{PSPCBOTLOB22 abstract = {"This survey starts with a general overview of the strategies for stock price change predictions based on market data and in particular Limit Order Book (LOB) data. The main discussion is devoted to the systematic analysis, comparison, and critical evaluation of the state-of-the-art studies in the research area of stock price movement predictions based on LOB data. LOB and Order Flow data are two of the most valuable information sources available to traders on the stock markets. Academic researchers are actively exploring the application of different quantitative methods and algorithms for this type of data to predict stock price movements. With the advancements in machine learning and subsequently in deep learning, the complexity and computational intensity of these models was growing, as well as the claimed predictive power. Some researchers claim accuracy of stock price movement prediction well in excess of 80%. These models are now commonly employed by automated market-making programs to set bids and ask quotes. If these results were also applicable to arbitrage trading strategies, then those algorithms could make a fortune for their developers. Thus, the open question is whether these results could be used to generate buy and sell signals that could be exploited with active trading. Therefore, this survey paper is intended to answer this question by reviewing these results and scrutinising their reliability. The ultimate conclusion from this analysis is that although considerable progress was achieved in this direction, even the state-of-art models can not guarantee a consistent profit in active trading. Taking this into account several suggestions for future research in this area were formulated along the three dimensions: input data, model’s architecture, and experimental setup. In particular, from the input data perspective, it is critical that the dataset is properly processed, up-to-date, and its size is sufficient for the particular model training. From the model architecture perspective, even though deep learning models are demonstrating a stronger performance than classical models, they are also more prone to over-fitting. To avoid over-fitting it is suggested to optimize the feature space, as well as a number of layers and neurons, and apply dropout functionality. The over-fitting problem can be also addressed by optimising the experimental setup in several ways: Introducing the early stopping mechanism; Saving the best weights of the model achieved during the training; Testing the model on the out-of-sample data, which should be separated from the validation and training samples. Finally, it is suggested to always conduct the trading simulation under realistic market conditions considering transactions costs, bid–ask spreads, and market impact."} author = {Ilia Zaznov and Julian Kunkel and Alfonso Dufour and Atta Badii} doi = {https://doi.org/10.3390/math10081234} issn = {2227-7390} journal = {Mathematics} publisher = {MDPI} series = {1234} title = {Predicting Stock Price Changes Based on the Limit Order Book: A Survey} url = {https://www.mdpi.com/2227-7390/10/8/1234} year = {2022} month = {04} }
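The over-fitting countermeasures recommended in the abstract (dropout, early stopping, keeping the best weights, evaluating on a separate out-of-sample split) translate into a few lines of training-loop code. The PyTorch sketch below illustrates them on synthetic data; the model, features, and hyper-parameters are placeholders and not the architectures reviewed in the survey.

```python
import copy

import torch
from torch import nn

torch.manual_seed(0)
X = torch.randn(3000, 40)             # e.g. flattened LOB snapshots (synthetic)
y = (X[:, :5].sum(dim=1) > 0).long()  # synthetic up/down label
train, val, test = (X[:2000], y[:2000]), (X[2000:2500], y[2000:2500]), (X[2500:], y[2500:])

model = nn.Sequential(nn.Linear(40, 64), nn.ReLU(), nn.Dropout(0.3), nn.Linear(64, 2))
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

best_state, best_val, patience, bad_epochs = None, float("inf"), 10, 0
for epoch in range(200):
    model.train()
    optimizer.zero_grad()
    loss = loss_fn(model(train[0]), train[1])
    loss.backward()
    optimizer.step()

    model.eval()
    with torch.no_grad():
        val_loss = loss_fn(model(val[0]), val[1]).item()
    if val_loss < best_val:                      # save the best weights
        best_val, best_state, bad_epochs = val_loss, copy.deepcopy(model.state_dict()), 0
    else:
        bad_epochs += 1
        if bad_epochs >= patience:               # early stopping
            break

model.load_state_dict(best_state)                # restore best weights
with torch.no_grad():                            # out-of-sample evaluation
    acc = (model(test[0]).argmax(dim=1) == test[1]).float().mean().item()
print(f"out-of-sample accuracy: {acc:.3f}")
```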
- Performance Evaluation of Open-Source Serverless Platforms for Kubernetes
(Jonathan Decker, Piotr Kasprzak, Julian Martin Kunkel),
2022-01-01
DOI
PDF
BIBTEX
@article{PEOOSPFKDK22 abstract = {"Serverless computing has grown massively in popularity over the last few years, and has provided developers with a way to deploy function-sized code units without having to take care of the actual servers or deal with logging, monitoring, and scaling of their code. High-performance computing (HPC) clusters can profit from improved serverless resource sharing capabilities compared to reservation-based systems such as Slurm. However, before running self-hosted serverless platforms in HPC becomes a viable option, serverless platforms must be able to deliver a decent level of performance. Other researchers have already pointed out that there is a distinct lack of studies in the area of comparative benchmarks on serverless platforms, especially for open-source self-hosted platforms. This study takes a step towards filling this gap by systematically benchmarking two promising self-hosted Kubernetes-based serverless platforms in comparison. While the resulting benchmarks signal potential, they demonstrate that many opportunities for performance improvements in serverless computing are being left on the table."} author = {Jonathan Decker and Piotr Kasprzak and Julian Martin Kunkel} doi = {10.3390/a15070234} grolink = {https://resolver.sub.uni-goettingen.de/purl?gro-2/112598} title = {Performance Evaluation of Open-Source Serverless Platforms for Kubernetes} year = {2022} month = {01} }
- Canonical Workflow for Experimental Research
(Dirk Betz, Claudia Biniossek, Christophe Blanchi, Felix Henninger, Thomas Lauer, Philipp Wieder, Peter Wittenburg, Martin Zünkeler),
2022-01-01
DOI
BIBTEX
@article{2_121152 author = {Dirk Betz and Claudia Biniossek and Christophe Blanchi and Felix Henninger and Thomas Lauer and Philipp Wieder and Peter Wittenburg and Martin Zünkeler} doi = {10.1162/dint_a_00123} grolink = {https://resolver.sub.uni-goettingen.de/purl?gro-2/121152} title = {Canonical Workflow for Experimental Research} year = {2022} month = {01} }
- Predicting Stock Price Changes Based on the Limit Order Book: A Survey
(Ilia Zaznov, Julian Kunkel, Alfonso Dufour, Atta Badii),
2022-01-01
DOI
PDF
BIBTEX
@article{PSPCBOTLOB22 abstract = {"This survey starts with a general overview of the strategies for stock price change predictions based on market data and in particular Limit Order Book (LOB) data. The main discussion is devoted to the systematic analysis, comparison, and critical evaluation of the state-of-the-art studies in the research area of stock price movement predictions based on LOB data. LOB and Order Flow data are two of the most valuable information sources available to traders on the stock markets. Academic researchers are actively exploring the application of different quantitative methods and algorithms for this type of data to predict stock price movements. With the advancements in machine learning and subsequently in deep learning, the complexity and computational intensity of these models was growing, as well as the claimed predictive power. Some researchers claim accuracy of stock price movement prediction well in excess of 80%. These models are now commonly employed by automated market-making programs to set bids and ask quotes. If these results were also applicable to arbitrage trading strategies, then those algorithms could make a fortune for their developers. Thus, the open question is whether these results could be used to generate buy and sell signals that could be exploited with active trading. Therefore, this survey paper is intended to answer this question by reviewing these results and scrutinising their reliability. The ultimate conclusion from this analysis is that although considerable progress was achieved in this direction, even the state-of-art models can not guarantee a consistent profit in active trading. Taking this into account several suggestions for future research in this area were formulated along the three dimensions: input data, model’s architecture, and experimental setup. In particular, from the input data perspective, it is critical that the dataset is properly processed, up-to-date, and its size is sufficient for the particular model training. From the model architecture perspective, even though deep learning models are demonstrating a stronger performance than classical models, they are also more prone to over-fitting. To avoid over-fitting it is suggested to optimize the feature space, as well as a number of layers and neurons, and apply dropout functionality. The over-fitting problem can be also addressed by optimising the experimental setup in several ways: Introducing the early stopping mechanism; Saving the best weights of the model achieved during the training; Testing the model on the out-of-sample data, which should be separated from the validation and training samples. Finally, it is suggested to always conduct the trading simulation under realistic market conditions considering transactions costs, bid–ask spreads, and market impact."} author = {Ilia Zaznov and Julian Kunkel and Alfonso Dufour and Atta Badii} doi = {10.3390/math10081234} grolink = {https://resolver.sub.uni-goettingen.de/purl?gro-2/107425} title = {Predicting Stock Price Changes Based on the Limit Order Book: A Survey} year = {2022} month = {01} }
- Realising Data-Centric Scientific Workflows with Provenance-Capturing on Data Lakes
(Hendrik Nolte, Philipp Wieder),
2022-01-01
DOI
BIBTEX
@article{2_121151 author = {Hendrik Nolte and Philipp Wieder} doi = {10.1162/dint_a_00141} grolink = {https://resolver.sub.uni-goettingen.de/purl?gro-2/121151} title = {Realising Data-Centric Scientific Workflows with Provenance-Capturing on Data Lakes} year = {2022} month = {01} }
- Toward data lakes as central building blocks for data management and analysis
(Philipp Wieder, Hendrik Nolte),
2022-01-01
DOI
BIBTEX
@article{2_114449 abstract = {"Data lakes are a fundamental building block for many industrial data analysis solutions and becoming increasingly popular in research. Often associated with big data use cases, data lakes are, for example, used as central data management systems of research institutions or as the core entity of machine learning pipelines. The basic underlying idea of retaining data in its native format within a data lake facilitates a large range of use cases and improves data reusability, especially when compared to the schema-on-write approach applied in data warehouses, where data is transformed prior to the actual storage to fit a predefined schema. Storing such massive amounts of raw data, however, has its very own challenges, spanning from the general data modeling, and indexing for concise querying to the integration of suitable and scalable compute capabilities. In this contribution, influential papers of the last decade have been selected to provide a comprehensive overview of developments and obtained results. The papers are analyzed with regard to the applicability of their input to data lakes that serve as central data management systems of research institutions. To achieve this, contributions to data lake architectures, metadata models, data provenance, workflow support, and FAIR principles are investigated. Last, but not least, these capabilities are mapped onto the requirements of two common research personae to identify open challenges. With that, potential research topics are determined, which have to be tackled toward the applicability of data lakes as central building blocks for research data management."} author = {Philipp Wieder and Hendrik Nolte} doi = {10.3389/fdata.2022.945720} grolink = {https://resolver.sub.uni-goettingen.de/purl?gro-2/114449} title = {Toward data lakes as central building blocks for data management and analysis} year = {2022} month = {01} }
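As a rough illustration of the schema-on-read idea discussed above, the sketch below keeps raw files unchanged in the lake and stores a small JSON metadata/provenance record next to them, which is then used for querying. Paths, field names, and the query interface are assumptions made for the example, not the metadata model proposed in the paper.

```python
import hashlib
import json
import time
from pathlib import Path

LAKE = Path("datalake")

def ingest(src: Path, producer: str, tags: list[str]) -> Path:
    """Copy a file into the lake unchanged and write a metadata sidecar."""
    raw_dir = LAKE / "raw"
    raw_dir.mkdir(parents=True, exist_ok=True)
    target = raw_dir / src.name
    target.write_bytes(src.read_bytes())          # keep the data in its native format
    record = {
        "file": str(target),
        "sha256": hashlib.sha256(target.read_bytes()).hexdigest(),
        "producer": producer,
        "tags": tags,
        "ingested_at": time.time(),
    }
    meta = LAKE / "metadata" / (src.name + ".json")
    meta.parent.mkdir(parents=True, exist_ok=True)
    meta.write_text(json.dumps(record, indent=2))
    return meta

def find(tag: str):
    """Schema-on-read query: interpret metadata only when the data is needed."""
    for meta in (LAKE / "metadata").glob("*.json"):
        record = json.loads(meta.read_text())
        if tag in record["tags"]:
            yield record["file"]
```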
- Road Intersection Coordination Scheme for Mixed Traffic (Human-Driven and Driverless Vehicles): A Systematic Review
(Ekene F. Ozioko, Julian Kunkel, Fredric Stahl),
2022-01-01
DOI
PDF
BIBTEX
@article{RICSFMTADV22 abstract = {"Autonomous vehicles (AVs) are emerging with enormous potentials to solve many challenging road traffic problems. The AV emergence leads to a paradigm shift in the road traffic system, making the penetration of autonomous vehicles fast and its coexistence with human-driven cars inevitable. The migration from the traditional driving to the intelligent driving system with AV’s gradual deployment needs supporting technology to address mixed traffic systems problems, mixed driving behaviour in a car-following model, variation in-vehicle type control means, the impact of a proportion of AV in traffic mixed traffic, and many more. The migration to fully AV will solve many traffic problems: desire to reclaim travel and commuting time, driving comfort, and accident reduction. Motivated by the above facts, this paper presents an extensive review of road intersection mixed traffic management techniques with a classification matrix of different traffic management strategies and technologies that could effectively describe a mix of human and autonomous vehicles. It explores the existing traffic control strategies and analyses their compatibility in a mixed traffic environment. Then review their drawback and build on it for the proposed robust mix of traffic management schemes. Though many traffic control strategies have been in existence, the analysis presented in this paper gives new insights to the readers on the applications of the cell reservation strategy in a mixed traffic environment. Though many traffic control strategies have been in existence, the Gipp’s car-following model has shown to be very effective for optimal traffic flow performance."} author = {Ekene F. Ozioko and Julian Kunkel and Fredric Stahl} doi = {10.1155/2022/2951999} grolink = {https://resolver.sub.uni-goettingen.de/purl?gro-2/113814} title = {Road Intersection Coordination Scheme for Mixed Traffic (Human-Driven and Driverless Vehicles): A Systematic Review} year = {2022} month = {01} }
Open Topics for Theses and Projects
Topic | Professor | Type |
---|---|---|
Metadata quality dashboard for the Deutsche Digitale Bibliothek | Prof. Ramin Yahyapour | BSc, MSc |
SSO Keycloak integration and self-services for a community portal | Prof. Ramin Yahyapour | BSc, MSc |
Knowledge Graphs and NLP techniques | Prof. Ramin Yahyapour | BSc, MSc |
Implementation of an API specification to enhance the functionality of a Text- and Data-Mining system | Prof. Ramin Yahyapour | BSc, MSc |
Token Management for an API to utilise HPC resources in generic workflows | Prof. Ramin Yahyapour | BSc, MSc |
Cluster on Demand with Kubernetes | Prof. Julian Kunkel | BSc, MSc, PhD |
Parallel applications with containers | Prof. Julian Kunkel | BSc, MSc, PhD |
Digital twin of the data center: creation of a 3D model of the GWDG data center for walkthroughs in virtual reality | Prof. Julian Kunkel | BSc, MSc |
Digital teaching: development of examination scenarios for HPC skills | Prof. Julian Kunkel | BSc, MSc |
Development of a provenance-aware ad-hoc interface for a data lake | Prof. Julian Kunkel | BSc, MSc |
Currently Supervised Theses and Projects
Topic | Student | Professor | Type |
---|---|---|---|
Projects

Petra Gaugel
- Email:
- petra.gaugel@gwdg.de
Competencies:
- HLRN/NHR
- Administration
- Third-party funded projects
Publications
Open Topics for Theses and Projects
Topic | Professor | Type |
---|---|---|
Currently Supervised Theses and Projects
Topic | Student | Professor | Type |
---|---|---|---|
Projects

Sadegh Keshtkar
- Email:
- sadegh.keshtkar@gwdg.de
Competencies:
- HLRN/NHR
- Artificial intelligence
- Deep Learning
Publications
Open Topics for Theses and Projects
Topic | Professor | Type |
---|---|---|
Currently Supervised Theses and Projects
Topic | Student | Professor | Type |
---|---|---|---|
Projects

Dr. Freja Nordsiek
- Email:
- freja.nordsiek@gwdg.de
Competencies:
- HLRN/NHR
- Administration
Publications
Open Topics for Theses and Projects
Topic | Professor | Type |
---|---|---|
Currently Supervised Theses and Projects
Topic | Student | Professor | Type |
---|---|---|---|
Projects

Oskar Schirmer
- Email:
- oskar.schirmer@gwdg.de
Competencies:
- HLRN/NHR
- Administration
Publications
Open Topics for Theses and Projects
Topic | Professor | Type |
---|---|---|
Currently Supervised Theses and Projects
Topic | Student | Professor | Type |
---|---|---|---|
Projects

Dr. Muzzamil Aziz
- Email:
- muzzamil.aziz@gwdg.de
- Tel.:
- 0551 3930282
Publications
- NEPHELE: An End-to-End Scalable and Dynamically Reconfigurable Optical Architecture for Application-Aware SDN Cloud Data Centers
(Paraskevas Bakopoulos, Konstantinos Christodoulopoulos, Giada Landi, Muzzamil Aziz, Eitan Zahavi, Domenico Gallico, Richard Pitwon, Konstantinos Tokas, Ioannis Patronas, Marco Capitani, Christos Spatharakis, Konstantinos Yiannopoulos, Kai Wang, Konstantinos Kontodimas, Ioannis Lazarou, Philipp Wieder, Dionysios I. Reisis, Emmanouel Manos Varvarigos, Matteo Biancani, Hercules Avramopoulos),
2018-01-01
DOI
BIBTEX
@article{2_90984 author = {Paraskevas Bakopoulos and Konstantinos Christodoulopoulos and Giada Landi and Muzzamil Aziz and Eitan Zahavi and Domenico Gallico and Richard Pitwon and Konstantinos Tokas and Ioannis Patronas and Marco Capitani and Christos Spatharakis and Konstantinos Yiannopoulos and Kai Wang and Konstantinos Kontodimas and Ioannis Lazarou and Philipp Wieder and Dionysios I. Reisis and Emmanouel Manos Varvarigos and Matteo Biancani and Hercules Avramopoulos} doi = {10.1109/MCOM.2018.1600804} grolink = {https://resolver.sub.uni-goettingen.de/purl?gro-2/90984} title = {NEPHELE: An End-to-End Scalable and Dynamically Reconfigurable Optical Architecture for Application-Aware SDN Cloud Data Centers} year = {2018} month = {01} }
- Latency-Sensitive Data Allocation and Workload Consolidation for Cloud Storage
(Song Yang, Philipp Wieder, Muzzamil Aziz, Ramin Yahyapour, Xiaoming Fu, Xu Chen),
2018-01-01
DOI
BIBTEX
@article{2_93400 abstract = {"Customers often suffer from the variability of data access time in (edge) cloud storage service, caused by network congestion, load dynamics, and so on. One ef cient solution to guarantee a reliable latency-sensitive service (e.g., for industrial Internet of Things application) is to issue requests with multiple download/upload sessions which access the required data (replicas) stored in one or more servers, and use the earliest response from those sessions. In order to minimize the total storage costs, how to optimally allocate data in a minimum number of servers without violating latency guarantees remains to be a crucial issue for the cloud provider to deal with. In this paper, we study the latency-sensitive data allocation problem, the latency-sensitive data reallocation problem and the latency-sensitive workload consolidation problem for cloud storage. We model the data access time as a given distribution whose cumulative density function is known, and prove that these three problems are NP-hard. To solve them, we propose an exact integer nonlinear program (INLP) and a Tabu Search-based heuristic. The simulation results reveal that the INLP can always achieve the best performance in terms of lower number of used nodes and higher storage and throughput utilization, but this comes at the expense of much higher running time. The Tabu Searchbased heuristic, on the other hand, can obtain close-to-optimal performance, but in a much lower running time."} author = {Song Yang and Philipp Wieder and Muzzamil Aziz and Ramin Yahyapour and Xiaoming Fu and Xu Chen} doi = {10.1109/ACCESS.2018.2883674} grolink = {https://resolver.sub.uni-goettingen.de/purl?gro-2/93400} title = {Latency-Sensitive Data Allocation and Workload Consolidation for Cloud Storage} year = {2018} month = {01} }
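A back-of-the-envelope reading of the replication idea in the abstract: if a single download/upload session meets a deadline d with probability F(d), given by the known CDF of the access time, and sessions are assumed independent, then the earliest of k parallel sessions meets the deadline with probability 1 − (1 − F(d))^k. The small sketch below turns this into a rule of thumb for how many sessions are needed; the numbers are made up, and the paper's INLP and Tabu Search formulations are not reproduced here.

```python
import math

def sessions_needed(f_d: float, target: float) -> int:
    """Smallest k with 1 - (1 - f_d)**k >= target (0 < f_d < 1, 0 < target < 1)."""
    return math.ceil(math.log(1.0 - target) / math.log(1.0 - f_d))

if __name__ == "__main__":
    f_d = 0.7        # assumed probability that one session meets the deadline
    target = 0.999   # required service level
    k = sessions_needed(f_d, target)
    print(k, 1 - (1 - f_d) ** k)   # -> 6 sessions, ~0.99927
```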
- Latency-Sensitive Data Allocation for cloud storage
(Song Yang, Philipp Wieder, Muzzamil Aziz, Ramin Yahyapour, Xiaoming Fu),
In 2017 IFIP/IEEE Symposium on Integrated Network and Service Management (IM),
2017-01-01
DOI
BIBTEX
@inproceedings{2_13221 abstract = {"Customers often suffer from the variability of data access time in cloud storage service, caused by network congestion, load dynamics, etc. One solution to guarantee a reliable latency-sensitive service is to issue requests with multiple download/upload sessions, accessing the required data (replicas) stored in one or more servers. In order to minimize storage costs, how to optimally allocate data in a minimum number of servers without violating latency guarantees remains to be a crucial issue for the cloud provider to tackle. In this paper, we study the latency-sensitive data allocation problem for cloud storage. We model the data access time as a given distribution whose Cumulative Density Function (CDF) is known, and prove that this problem is NP-hard. To solve it, we propose both exact Integer Nonlinear Program (INLP) and Tabu Search-based heuristic. The proposed algorithms are evaluated in terms of the number of used servers, storage utilization and throughput utilization."} author = {Song Yang and Philipp Wieder and Muzzamil Aziz and Ramin Yahyapour and Xiaoming Fu} doi = {10.23919/INM.2017.7987258} grolink = {https://resolver.sub.uni-goettingen.de/purl?gro-2/13221} journal = {2017 IFIP/IEEE Symposium on Integrated Network and Service Management (IM)} title = {Latency-Sensitive Data Allocation for cloud storage} year = {2017} month = {01} }
- SDN-enabled application-aware networking for data center networks
(Muzzamil Aziz, H. Amirreza Fazely, Giada Landi, Domenico Gallico, Kostas Christodoulopoulos, Philipp Wieder),
2016-01-01
DOI
BIBTEX
@inproceedings{2_90983 author = {Muzzamil Aziz and H. Amirreza Fazely and Giada Landi and Domenico Gallico and Kostas Christodoulopoulos and Philipp Wieder} doi = {10.1109/ICECS.2016.7841210} grolink = {https://resolver.sub.uni-goettingen.de/purl?gro-2/90983} title = {SDN-enabled application-aware networking for data center networks} year = {2016} month = {01} }
Open Topics for Theses and Projects
Topic | Professor | Type |
---|---|---|
Currently Supervised Theses and Projects
Topic | Student | Professor | Type |
---|---|---|---|
Projects

Aytaj Badirova
- Email:
- aytaj.badirova@gwdg.de
- Tel.:
- 0551 3930107
Publications
- An Optimized Single Sign-On Schema for Reliable Multi-Level Security Management in Clouds
(Aytaj Badirova, Shirin Dabbaghi, Faraz Fatemi-Moghaddam, Philipp Wieder, Ramin Yahyapour),
In Proceedings of FiCloud 2021 – 8th International Conference on Future Internet of Things and Cloud,
2021-01-01
DOI
BIBTEX
@inproceedings{2_121153 author = {Aytaj Badirova and Shirin Dabbaghi and Faraz Fatemi-Moghaddam and Philipp Wieder and Ramin Yahyapour} doi = {10.1109/FiCloud49777.2021.00014} grolink = {https://resolver.sub.uni-goettingen.de/purl?gro-2/121153} journal = {Proceedings of FiCloud 2021 – 8th International Conference on Future Internet of Things and Cloud} title = {An Optimized Single Sign-On Schema for Reliable Multi-Level Security Management in Clouds} year = {2021} month = {01} }
Open Topics for Theses and Projects
Topic | Professor | Type |
---|---|---|
Currently Supervised Theses and Projects
Topic | Student | Professor | Type |
---|---|---|---|
Projects

Dr. Bernhard Bandow
- Email:
- bernhard.bandow@gwdg.de
- Tel.:
- 0551 2012113
Bernhard Bandow received his diploma in physics from the Technische Universität Berlin with a thesis on MD simulations of systems in confined geometries. He then obtained his doctorate in physical chemistry from the Christian-Albrechts-Universität zu Kiel in 2007 on the global geometry optimisation of water clusters using genetic algorithms. After a postdoc at the Deutsches Institut für Kautschuktechnik (DIK) in Hannover, he moved to the Leibniz Universität Hannover in 2008, working for the computing centre and the North German Supercomputing Alliance (HLRN). From 2011 he worked at the computing centre of the Max-Planck-Institut für Sonnensystemforschung in Göttingen. In 2019 he joined the GWDG as HPC coordinator for the Göttingen Campus Institute for Dynamics of Biological Networks (CIDBN).
Competencies:
- Campus Institute for Dynamics of Biological Networks (CIDBN)
Publications
Open Topics for Theses and Projects
Topic | Professor | Type |
---|---|---|
Currently Supervised Theses and Projects
Topic | Student | Professor | Type |
---|---|---|---|
Projects

Johannes Biermann
Johannes Biermann has been working in the AG Computing in the area of Digital Humanities (DH) since 14 February 2022. The goal is to establish the use of HPC in this discipline. Before that, he carried out various projects in the DH context at the SUB Göttingen. Johannes Biermann studied "Informationstechnik - Betriebliche Informationssysteme" at the Duale Hochschule Baden-Württemberg Stuttgart. He then worked as an eBusiness specialist at a private company. In 2013 he completed his master's degree in "Conservation of New Media and Digital Information" at the Staatliche Akademie der Bildenden Künste in Stuttgart.
Competencies:
- Datalake
Publications
Open Topics for Theses and Projects
Topic | Professor | Type |
---|---|---|
Currently Supervised Theses and Projects
Topic | Student | Professor | Type |
---|---|---|---|
Projects

Dr. Sven Bingert
Deputy Head of the Working Group
- Email:
- sven.bingert@gwdg.de
- Tel.:
- 0551 3930278
Publications
- A Graph Database for Persistent Identifiers
(Triet Doan, Sven Bingert, Lena Wiese, Ramin Yahyapour),
In Proceedings of the Conference on "Lernen, Wissen, Daten, Analysen",
2019-01-01
DOI
BIBTEX
@inproceedings{2_91394 author = {Triet Doan and Sven Bingert and Lena Wiese and Ramin Yahyapour} doi = {10.15488/9817} grolink = {https://resolver.sub.uni-goettingen.de/purl?gro-2/91394} journal = {Proceedings of the Conference on "Lernen, Wissen, Daten, Analysen"} title = {A Graph Database for Persistent Identifiers} year = {2019} month = {01} }
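As a toy illustration of the general idea in the title above (persistent identifier records held in a graph database), the sketch below models PIDs as nodes and typed relations as edges using networkx. The identifiers, attribute names, and relation types are invented for the example; the actual schema and database used in the paper may differ.

```python
import networkx as nx

# Each PID is a node with key/value attributes; typed edges capture relations
# between records. All values below are made up for the example.
g = nx.DiGraph()
g.add_node("hdl:21.12345/abc-1", type="dataset", url="https://example.org/data/1")
g.add_node("hdl:21.12345/coll-9", type="collection")
g.add_node("hdl:21.12345/person-3", type="person")

g.add_edge("hdl:21.12345/abc-1", "hdl:21.12345/coll-9", relation="isPartOf")
g.add_edge("hdl:21.12345/abc-1", "hdl:21.12345/person-3", relation="hasCreator")

# Example traversal: everything directly related to a given PID.
pid = "hdl:21.12345/abc-1"
for _, target, data in g.out_edges(pid, data=True):
    print(f"{pid} -[{data['relation']}]-> {target}")
```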
Open Topics for Theses and Projects
Topic | Professor | Type |
---|---|---|
SSO Keycloak integration and self-services for a community portal | Prof. Ramin Yahyapour | BSc, MSc |
Implementation of an API specification to enhance the functionality of a Text- and Data-Mining system | Prof. Ramin Yahyapour | BSc, MSc |
Token Management for an API to utilise HPC resources in generic workflows | Prof. Ramin Yahyapour | BSc, MSc |
Currently Supervised Theses and Projects
Topic | Student | Professor | Type |
---|---|---|---|
Projects

Dr. Christian Boehme
Deputy Head of Computing (AG C)
- Email:
- christian.boehme@gwdg.de
- Tel.:
- 0551 2011839
Christian Böhme has been with the GWDG since 2003, where he introduced and operated resource management for the GWDG's first Linux-based HPC cluster. He has coordinated the planning and operation of several HPC systems, including the NHR system "Emmy" and the Modular Data Center (MDC) of the University of Göttingen. He has also coordinated national research projects on HPC-as-a-Service and performance monitoring. Previous career stages were the University of Bochum with Dominik Marx, the University of Strasbourg with Georges Wipff, and the University of Marburg, where he obtained his doctorate in computational chemistry with Gernot Frenking.
Competencies:
- Infrastructure
- Operations
- HPC-as-a-Service
- Parallelisation techniques
- Container virtualisation
- Data management
- Resource management
- Performance monitoring
Publications
Open Topics for Theses and Projects
Topic | Professor | Type |
---|---|---|
Cluster on Demand with Kubernetes | Prof. Julian Kunkel | BSc, MSc, PhD |
Parallel applications with containers | Prof. Julian Kunkel | BSc, MSc, PhD |
Currently Supervised Theses and Projects
Topic | Student | Professor | Type |
---|---|---|---|
Projects

Jonathan Boginski
Jonathan Boginski has been part of the HPC team of the GWDG's AG C since 1 February 2022. He is currently in his fifth semester of business informatics and supports the HPC team as a student assistant in building the data lake, which is being developed together with the Max Planck Institute for Chemical Energy Conversion. Jonathan Boginski is interested in a wide range of topics, from music, video games, and sport to economic theory, IT automation, and app development.
Competencies:
- Datalake
Publications
Open Topics for Theses and Projects
Topic | Professor | Type |
---|---|---|
Currently Supervised Theses and Projects
Topic | Student | Professor | Type |
---|---|---|---|
Projects

Antonia Colán Bräunig
- Email:
- antonia.braeunig@gwdg.de
- Tel.:
- 0551 3930115
Publications
Open Topics for Theses and Projects
Topic | Professor | Type |
---|---|---|
Currently Supervised Theses and Projects
Topic | Student | Professor | Type |
---|---|---|---|
Projects

Triet Ho Anh Doan
- Email:
- triet.doan@gwdg.de
- Tel.:
- 0551 3930261
Publications
- OCR-D kompakt: Ergebnisse und Stand der Forschung in der Förderinitiative
(Konstantin Baierer, Matthias Boenig, Elisabeth Engl, Clemens Neudecker, Reinhard Altenhöner, Alexander Geyken, Johannes Mangei, Rainer Stotzka, Andreas Dengel, Martin Jenckel, Alexander Gehrke, Frank Puppe, Stefan Weil, Robert Sachunsky, Lena K. Schiffer, Maciej Janicki, Gerhard Heyer, Florian Fink, Klaus U. Schulz, Nikolaus Weichselbaumer, Saskia Limbach, Mathias Seuret, Rui Dong, Manuel Burghardt, Vincent Christlein, Triet Ho Anh Doan, Zeki Mustafa Dogan, Jörg-Holger Panzer, Kristine Schima-Voigt, Philipp Wieder),
2020-01-01
URL
DOI
BIBTEX
@misc{2_121682 abstract = {"Bereits seit einigen Jahren werden große Anstrengungen unternommen, um die im deutschen Sprachraum erschienenen Drucke des 16.-18. Jahrhunderts zu erfassen und zu digitalisieren. Deren Volltexttransformation konzeptionell und technisch vorzubereiten, ist das übergeordnete Ziel des DFG-Projekts OCR-D, das sich mit der Weiterentwicklung von Verfahren der Optical Character Recognition befasst. Der Beitrag beschreibt den aktuellen Entwicklungsstand der OCR-D-Software und analysiert deren erste Teststellung in ausgewählten Bibliotheken."} author = {Konstantin Baierer and Matthias Boenig and Elisabeth Engl and Clemens Neudecker and Reinhard Altenhöner and Alexander Geyken and Johannes Mangei and Rainer Stotzka and Andreas Dengel and Martin Jenckel and Alexander Gehrke and Frank Puppe and Stefan Weil and Robert Sachunsky and Lena K. Schiffer and Maciej Janicki and Gerhard Heyer and Florian Fink and Klaus U. Schulz and Nikolaus Weichselbaumer and Saskia Limbach and Mathias Seuret and Rui Dong and Manuel Burghardt and Vincent Christlein and Triet Ho Anh Doan and Zeki Mustafa Dogan and Jörg-Holger Panzer and Kristine Schima-Voigt and Philipp Wieder} doi = {10.18452/21548} grolink = {https://resolver.sub.uni-goettingen.de/purl?gro-2/121682} title = {OCR-D kompakt: Ergebnisse und Stand der Forschung in der Förderinitiative} url = {https://publications.goettingen-research-online.de/handle/2/116509} year = {2020} month = {01} }
- OLA-HD – Ein OCR-D-Langzeitarchiv für historische Drucke
(Triet Ho Anh Doan, Zeki Mustafa Doğan, Jörg-Holger Panzer, Kristine Schima-Voigt, Philipp Wieder),
2020-01-01
DOI
BIBTEX
@article{2_116509 author = {Triet Ho Anh Doan and Zeki Mustafa Doğan and Jörg-Holger Panzer and Kristine Schima-Voigt and Philipp Wieder} doi = {10.18452/21548} grolink = {https://resolver.sub.uni-goettingen.de/purl?gro-2/116509} title = {OLA-HD – Ein OCR-D-Langzeitarchiv für historische Drucke} year = {2020} month = {01} }
Open Topics for Theses and Projects
Topic | Professor | Type |
---|---|---|
Currently Supervised Theses and Projects
Topic | Student | Professor | Type |
---|---|---|---|
Projects
George Dogaru
- Email:
- george.dogaru@gwdg.de
Publications
Open Topics for Theses and Projects
Topic | Professor | Type |
---|---|---|
Currently Supervised Theses and Projects
Topic | Student | Professor | Type |
---|---|---|---|
Projects

Dr. Tim Ehlers
- Email:
- tim.ehlers@gwdg.de
- Tel.:
- 0551 2011520
Competencies:
- Administration
- Security
- Data science
- Business
Publications
Open Topics for Theses and Projects
Topic | Professor | Type |
---|---|---|
Currently Supervised Theses and Projects
Topic | Student | Professor | Type |
---|---|---|---|
Projects

Dr. Vanessa End
- Email:
- vanessa.end@gwdg.de
- Tel.:
- 0551 3930110
Vanessa End studied mathematics at the University of Göttingen and completed her Diplom in 2012 with the thesis "Sicherheitsaspekte in der Fingerabdruckerkennung – Eine statistische Attacke auf das Fuzzy Vault". In 2016 she completed her dissertation "Über kollektive Kommunikation und benachrichtigtes Lesen im Global Address Space Programming Interface (GASPI)". Since then she has been involved in the ProfiT-HPC project, the organisation of Euro-Par 2019, teaching, and HPC public relations and publications.
Competencies:
- Algorithms and distributed communication patterns
- Teaching
- Documentation
- Networking
Publications
- Butterfly-like Algorithms for GASPI Split Phase Allreduce
(Vanessa End, Ramin Yahyapour, Thomas Alrutz, Christian Simmendinger),
2016-01-01
BIBTEX
@article{2_57535 abstract = {"Collective communication routines pose a significant bottleneck of highly parallel programs. Research on different algorithms for disseminating information among all participat- ing processes in a collective communication has brought forth many different algorithms, some of which have a butterfly- like communication scheme. While these algorithms have been abandoned from usage in collective communication routines with larger messages, due to the congestion that arises from their use, these algorithms have ideal properties for split-phase allreduce routines: all processes are involved in the computation of the result in each communication round and they have few communication rounds. This article will present several different algorithms with a butterfly-like communication scheme and examine their usability for a GASPI allreduce library routine. The library routines will be compared to state-of-the-art MPI implementations and also to a tree-based allreduce algorithm."} author = {Vanessa End and Ramin Yahyapour and Thomas Alrutz and Christian Simmendinger} grolink = {https://resolver.sub.uni-goettingen.de/purl?gro-2/57535} title = {Butterfly-like Algorithms for GASPI Split Phase Allreduce} year = {2016} month = {01} }
- Adaption of the n-way Dissemination Algorithm for GASPI Split-Phase Allreduce
(Vanessa End, Ramin Yahyapour, Christian Simmendinger, Thomas Alrutz),
In The Fifth International Conference on Advanced Communications and Computation (INFOCOMP 2015),
2015-01-01
BIBTEX
@inproceedings{2_57541 abstract = {"This paper presents an adaption of the n-way dissemination algorithm, such that it can be used for an allreduce operation, which is - together with the barrier operation - one of the most time consuming collective communication routines available in most parallel communication interfaces and libraries. Thus, a fast underlying algorithm with few communication rounds is needed. The dissemination algorithm is such an algorithm and already used for a variety of barrier implementations due to its speed. Yet, this algorithm is also interesting for the split-phase allreduce operations, as defined in the Global Address Space Programming Interface (GASPI) specification, due to its small number of communication rounds. Even though it is a butterflylike algorithm, significant improvements in runtime are seen when comparing this implementation on top of ibverbs to different message-passing interface (MPI) implementations, which are the de facto standard for distributed memory computing."} address = {Red Hook, USA} author = {Vanessa End and Ramin Yahyapour and Christian Simmendinger and Thomas Alrutz} grolink = {https://resolver.sub.uni-goettingen.de/purl?gro-2/57541} journal = {The Fifth International Conference on Advanced Communications and Computation (INFOCOMP 2015)} title = {Adaption of the n-way Dissemination Algorithm for GASPI Split-Phase Allreduce} year = {2015} month = {01} }
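Both papers above build on dissemination-style exchange patterns. The sketch below simulates the classic dissemination schedule in plain Python: in round k, rank i combines the value received from rank (i − 2^k) mod p. With an idempotent operation such as max this already yields a correct allreduce for any process count in ⌈log2 p⌉ rounds; for non-idempotent operations (such as sum) and non-power-of-two p, double counting occurs, which is the issue the adapted and n-way variants address. This is a single-process simulation for illustration, not GASPI or MPI code.

```python
import math

def dissemination_allreduce(values, op=max):
    """Simulate the dissemination exchange pattern on a list of per-rank values."""
    p = len(values)
    state = list(values)
    for k in range(math.ceil(math.log2(p))):
        dist = 2 ** k
        # In round k, rank i receives the current partial result of rank (i - 2**k) % p
        incoming = [state[(i - dist) % p] for i in range(p)]
        state = [op(state[i], incoming[i]) for i in range(p)]
    return state

if __name__ == "__main__":
    vals = [3, 41, 7, 15, 2, 28, 9]
    print(dissemination_allreduce(vals))   # every "rank" ends up with 41
```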
Open Topics for Theses and Projects
Topic | Professor | Type |
---|---|---|
Currently Supervised Theses and Projects
Topic | Student | Professor | Type |
---|---|---|---|
Projects

Dr. Laura Endter
- Email:
- laura.endter@gwdg.de
- Tel.:
- 0551 3930128
Competencies:
- DLR
Publications
Open Topics for Theses and Projects
Topic | Professor | Type |
---|---|---|
Currently Supervised Theses and Projects
Topic | Student | Professor | Type |
---|---|---|---|
Projects

Amirreza Fazely Hamedani
- Email:
- amirreza.fazely.hamedani@gwdg.de
- Tel.:
- 0551 3930259
Publications
Open Topics for Theses and Projects
Topic | Professor | Type |
---|---|---|
Currently Supervised Theses and Projects
Topic | Student | Professor | Type |
---|---|---|---|
Projects
Lukas Friedrich
- Email:
- lukas.friedrich@gwdg.de
Publications
Open Topics for Theses and Projects
Topic | Professor | Type |
---|---|---|
Currently Supervised Theses and Projects
Topic | Student | Professor | Type |
---|---|---|---|
Projects

Alexander Goldmann
Alexander Goldmann studied media management (B.A.) at the Ostfalia University of Applied Sciences in Salzgitter. He then completed training as a PR consultant at the Berlin advertising agency "Zum goldenen Hirschen". He was also responsible, as community manager, for building and maintaining the community at the Berlin coworking space St. Oberholz as well as for all of its marketing and PR. In this role he also gained experience in supporting a wide range of community members. In addition to his work as community manager, he was responsible for planning and running events, trainings, workshops, and other activities for internal and external participants.
Competencies:
- Community management
- Public Relations
- Event management
Publications
Open Topics for Theses and Projects
Topic | Professor | Type |
---|---|---|
Currently Supervised Theses and Projects
Topic | Student | Professor | Type |
---|---|---|---|
Projects

Hauke Gronenberg
- Email:
- hauke.gronenberg@gwdg.de
After completing his bachelor's degree in biology at the University of Göttingen, Hauke specialised in remote sensing during his master's in Forest Information Technology at the HNE Eberswalde and the Warsaw University of Life Sciences (SGGW). In his master's thesis at the Helmholtz Centre for Environmental Research in Leipzig, he analysed airborne lidar data with the aim of single-tree detection and species classification over large areas.
Competencies:
- ForestCare
- Machine learning
Publications
Open Topics for Theses and Projects
Topic | Professor | Type |
---|---|---|
Currently Supervised Theses and Projects
Topic | Student | Professor | Type |
---|---|---|---|
Projects

Marcel Hellkamp
- Email:
- marcel.hellkamp@gwdg.de
- Tel.:
- 0551 3930281
Publications
- menoci: Lightweight Extensible Web Portal enabling FAIR Data Management for Biomedical Research Projects
(Markus Suhr, Christoph Lehmann, Christian Robert Bauer, Theresa Bender, Cornelius Knopp, Luca Freckmann, Björn Öst Hansen, Christian Henke, Georg Aschenbrandt, Lea Katharina Kühlborn, Sophia Rheinländer, Linus Weber, Bartlomiej Marzec, Marcel Hellkamp, Philipp Wieder, Harald Kusch, Ulrich Sax, Sara Yasemin Nussbeck),
2020-01-01
URL
BIBTEX
@misc{2_63412 abstract = {"Background: Biomedical research projects deal with data management requirements from multiple sources like funding agencies' guidelines, publisher policies, discipline best practices, and their own users' needs. We describe functional and quality requirements based on many years of experience implementing data management for the CRC 1002 and CRC 1190. A fully equipped data management software should improve documentation of experiments and materials, enable data storage and sharing according to the FAIR Guiding Principles while maximizing usability, information security, as well as software sustainability and reusability. Results: We introduce the modular web portal software menoci for data collection, experiment documentation, data publication, sharing, and preservation in biomedical research projects. Menoci modules are based on the Drupal content management system which enables lightweight deployment and setup, and creates the possibility to combine research data management with a customisable project home page or collaboration platform. Conclusions: Management of research data and digital research artefacts is transforming from individual researcher or groups best practices towards project- or organisation-wide service infrastructures. To enable and support this structural transformation process, a vital ecosystem of open source software tools is needed. Menoci is a contribution to this ecosystem of research data management tools that is specifically designed to support biomedical research projects."} author = {Markus Suhr and Christoph Lehmann and Christian Robert Bauer and Theresa Bender and Cornelius Knopp and Luca Freckmann and Björn Öst Hansen and Christian Henke and Georg Aschenbrandt and Lea Katharina Kühlborn and Sophia Rheinländer and Linus Weber and Bartlomiej Marzec and Marcel Hellkamp and Philipp Wieder and Harald Kusch and Ulrich Sax and Sara Yasemin Nussbeck} grolink = {https://resolver.sub.uni-goettingen.de/purl?gro-2/63412} title = {menoci: Lightweight Extensible Web Portal enabling FAIR Data Management for Biomedical Research Projects} url = {https://sfb1190.med.uni-goettingen.de/production/literature/publications/106} year = {2020} month = {01} }
Open Topics for Theses and Projects
Topic | Professor | Type |
---|---|---|
Currently Supervised Theses and Projects
Topic | Student | Professor | Type |
---|---|---|---|
Projects

Christoph Hottenroth
Christoph Hottenroth works as a technical staff member in the Computing working group (AG C). He is an IT systems administrator in the team that operates the new DLR supercomputer "CARO". Christoph previously worked for a long time as an IT systems administrator in the area of Windows administration, covering everything from hardware and virtualisation to networking and the administration of specialised software. His particular focus so far has been Microsoft Exchange.
Competencies:
- Administration/system support
- Support
Publications
Open Topics for Theses and Projects
Topic | Professor | Type |
---|---|---|
Currently Supervised Theses and Projects
Topic | Student | Professor | Type |
---|---|---|---|
Projects

Dr. Tibor Kálmán
- Email:
- tibor.kalman@gwdg.de
- Tel.:
- 0551 3930266
Publications
Open Topics for Theses and Projects
Topic | Professor | Type |
---|---|---|
Currently Supervised Theses and Projects
Topic | Student | Professor | Type |
---|---|---|---|
Projects

Dr. Nils Kanning
- Email:
- nils.kanning@gwdg.de
- Tel.:
- 0551 3930335
Nils Kanning studied physics at the University of Göttingen and obtained his doctorate in mathematical physics at the Humboldt-Universität zu Berlin. He continued his research on integrable models as a postdoc at the Ludwig-Maximilians-Universität München. At the GWDG he is now part of the team that operates and supports the DLR HPC system "Caro". In this role he takes care of research collaborations and public relations for the system.
Competencies:
- DLR
- Administration
- Research Collaboration
- PR
Publications
Open Topics for Theses and Projects
Topic | Professor | Type |
---|---|---|
Currently Supervised Theses and Projects
Topic | Student | Professor | Type |
---|---|---|---|
Projects

Ruben Kellner
- Email:
- ruben.kellner@gwdg.de
Ruben Kellner has acquired a broad range of technical experience over the course of his career. He started in IT with an apprenticeship as an IT specialist for software development, but soon shifted towards supporting users, software, and industrial machines. He applied this broad background to training users, operating machines and software, and developing his own products.
Competencies:
- Training
Publications
Open Topics for Theses and Projects
Topic | Professor | Type |
---|---|---|
Currently Supervised Theses and Projects
Topic | Student | Professor | Type |
---|---|---|---|
Projects

Sabih Ahmed Khan
- Email:
- sabih-ahmed.khan@gwdg.de
Publications
Open Topics for Theses and Projects
Topic | Professor | Type |
---|---|---|
Currently Supervised Theses and Projects
Topic | Student | Professor | Type |
---|---|---|---|
Projects

Azat Khuziyakhmetov
- Email:
- azat.khuziyakhmetov@gwdg.de
- Tel.:
- 0551 20126802
Azat Khuziyakhmetov studied applied mathematics and computer science at Moscow State University, continued with the master's programme Internet Technologies and Information Systems (ITIS) at the University of Göttingen, and graduated with the master's thesis "Anomalieerkennung der GPU-Nutzung mit neuronalen Netzen". He has worked as a software developer and administrator. At the University of Göttingen he was involved in teaching the courses "Algorithms for Programming Contests" and "Parallel Computing". At the GWDG he was involved in the ProfiT-HPC project. He currently works in the DLR team and administers several HPC clusters.
Competencies:
- DLR
- Administration
- Monitoring
Publications
Open Topics for Theses and Projects
Topic | Professor | Type |
---|---|---|
Comparison of the performance of remote visualisation techniques | Prof. Julian Kunkel | BSc, MSc |
Recommendation system for performance monitoring and analysis in HPC | Prof. Julian Kunkel | BSc, MSc |
Currently Supervised Theses and Projects
Topic | Student | Professor | Type |
---|---|---|---|
Projects

Ph.D. Peter Király
- Email:
- peter.kiraly@gwdg.de
- Tel.:
- 0551 3920468
Publications
Open Topics for Theses and Projects
Topic | Professor | Type |
---|---|---|
Currently Supervised Theses and Projects
Topic | Student | Professor | Type |
---|---|---|---|
Projects

Sebastian Krey
- Email:
- sebastian.krey@gwdg.de
- Tel.:
- 0551 3930277
Sebastian Krey studied statistics at the Technische Universität Dortmund with a minor in physics and a major in technometrics. He completed his studies in 2008 with the diploma thesis "SVM-basierte Schallklassifikation". He then held a scholarship in the research training group "Statistische Modellierung" and was a research associate in the DFG research unit 1511 "Schutz- und Steuerungssysteme für zuverlässige und sichere elektrische Energieübertragung". From 2015 to 2019 he was a research associate at the Institute for Data Science, Engineering, and Analytics of the Technische Universität Köln, where he worked on various projects on statistics and machine learning methods and brought his experience in applied mathematics and statistics into the mathematics and data science education of the engineering and computer science degree programmes.
Competencies:
- HLRN/NHR
- Administration
- HPC infrastructure (storage, interconnects, cooling)
- Data science
Publications
Open Topics for Theses and Projects
Topic | Professor | Type |
---|---|---|
Currently Supervised Theses and Projects
Topic | Student | Professor | Type |
---|---|---|---|
Projects

Dhiraj Kumar
- Email:
- dhiraj.kumar@gwdg.de
Dhiraj works as a student research assistant in the Computing working group, where he supports the HPC team in the area of monitoring software. After a bachelor's degree in mathematics and a postgraduate diploma in data science, he has continued to work on data analysis and visualisation methods using R and Python.
Competencies:
- Monitoring
Publications
Open Topics for Theses and Projects
Topic | Professor | Type |
---|---|---|
Currently Supervised Theses and Projects
Topic | Student | Professor | Type |
---|---|---|---|
Projects

Prof. Dr. Julian Kunkel
Head of Computing (AG C)
- Email:
- julian.kunkel@gwdg.de
- Office hours:
- Mon-Fri 21:00 - 24:00
Dr. Kunkel is Professor of High-Performance Computing at the University of Göttingen, deputy head of the GWDG, and head of the Computing working group. Before that, he was a lecturer in the Computer Science Department of the University of Reading and a postdoc in the research department of the German Climate Computing Center (DKRZ). Julian became interested in HPC storage in 2003 during his computer science studies. Besides his main goal of providing efficient and high-performance I/O, his HPC-related interests include data reduction techniques, performance analysis of parallel applications and parallel I/O, management of cluster systems, cost-efficiency considerations, and the software engineering of scientific software. He is a founding member of the IO500 benchmarking effort, the Virtual Institute for I/O, and the HPC Certification Forum. Julian is also committed to excellence in research and teaching.
Competencies:
- High-performance data analytics
- Data management
- Data-driven workflows
- Parallel file systems
- Application of machine learning methods
- Performance portability
- Data reduction techniques
- Management of cluster systems
- Performance analysis of parallel applications and parallel I/O
- Software engineering of scientific software
- Personalised teaching
Publications
- Performance Evaluation of Open-Source Serverless Platforms for Kubernetes
(Jonathan Decker, Piotr Kasprzak, Julian Kunkel),
In Algorithms,
MDPI,
ISSN: 1999-4893,
2022-06-02
URL
DOI
PDF
BIBTEX
@article{PEOOSPFKDK22 abstract = {"Serverless computing has grown massively in popularity over the last few years, and has provided developers with a way to deploy function-sized code units without having to take care of the actual servers or deal with logging, monitoring, and scaling of their code. High-performance computing (HPC) clusters can profit from improved serverless resource sharing capabilities compared to reservation-based systems such as Slurm. However, before running self-hosted serverless platforms in HPC becomes a viable option, serverless platforms must be able to deliver a decent level of performance. Other researchers have already pointed out that there is a distinct lack of studies in the area of comparative benchmarks on serverless platforms, especially for open-source self-hosted platforms. This study takes a step towards filling this gap by systematically benchmarking two promising self-hosted Kubernetes-based serverless platforms in comparison. While the resulting benchmarks signal potential, they demonstrate that many opportunities for performance improvements in serverless computing are being left on the table."} author = {Jonathan Decker and Piotr Kasprzak and Julian Kunkel} doi = {https://doi.org/10.3390/a15070234} issn = {1999-4893} journal = {Algorithms} publisher = {MDPI} title = {Performance Evaluation of Open-Source Serverless Platforms for Kubernetes} url = {https://www.mdpi.com/1999-4893/15/7/234} year = {2022} month = {06} }
- Road Intersection Coordination Scheme for Mixed Traffic (Human-Driven and Driverless Vehicles): A Systematic Review
(Ekene F. Ozioko, Julian Kunkel, Frederic Stahl),
In Journal of Advanced Transportation,
Hindawi,
2022-05-30
DOI
PDF
BIBTEX
@article{RICSFMTADV22 abstract = {"Autonomous vehicles (AVs) are emerging with enormous potentials to solve many challenging road traffic problems. The AV emergence leads to a paradigm shift in the road traffic system, making the penetration of autonomous vehicles fast and its coexistence with human-driven cars inevitable. The migration from the traditional driving to the intelligent driving system with AV’s gradual deployment needs supporting technology to address mixed traffic systems problems, mixed driving behaviour in a car-following model, variation in-vehicle type control means, the impact of a proportion of AV in traffic mixed traffic, and many more. The migration to fully AV will solve many traffic problems: desire to reclaim travel and commuting time, driving comfort, and accident reduction. Motivated by the above facts, this paper presents an extensive review of road intersection mixed traffic management techniques with a classification matrix of different traffic management strategies and technologies that could effectively describe a mix of human and autonomous vehicles. It explores the existing traffic control strategies and analyses their compatibility in a mixed traffic environment. Then review their drawback and build on it for the proposed robust mix of traffic management schemes. Though many traffic control strategies have been in existence, the analysis presented in this paper gives new insights to the readers on the applications of the cell reservation strategy in a mixed traffic environment. Though many traffic control strategies have been in existence, the Gipp’s car-following model has shown to be very effective for optimal traffic flow performance."} author = {Ekene F. Ozioko and Julian Kunkel and Frederic Stahl} doi = {https://doi.org/10.1155/2022/2951999} journal = {Journal of Advanced Transportation} publisher = {Hindawi} title = {Road Intersection Coordination Scheme for Mixed Traffic (Human-Driven and Driverless Vehicles): A Systematic Review} year = {2022} month = {05} }
- Improve the Deep Learning Models in Forestry Based on Explanations and Expertise
(Ximeng Cheng, Ali Doosthosseini, Julian Kunkel),
In Frontiers in Plant Science,
Frontiers Media,
ISSN: 1664-462X,
2022-05-01
DOI
PDF
BIBTEX
@article{ITDLMIFBOE22 abstract = {"In forestry studies, deep learning models have achieved excellent performance in many application scenarios (e.g., detecting forest damage). However, the unclear model decisions (i.e., black-box) undermine the credibility of the results and hinder their practicality. This study intends to obtain explanations of such models through the use of explainable artificial intelligence methods, and then use feature unlearning methods to improve their performance, which is the first such attempt in the field of forestry. Results of three experiments show that the model training can be guided by expertise to gain specific knowledge, which is reflected by explanations. For all three experiments based on synthetic and real leaf images, the improvement of models is quantified in the classification accuracy (up to 4.6%) and three indicators of explanation assessment (i.e., root-mean-square error, cosine similarity, and the proportion of important pixels). Besides, the introduced expertise in annotation matrix form was automatically created in all experiments. This study emphasizes that studies of deep learning in forestry should not only pursue model performance (e.g., higher classification accuracy) but also focus on the explanations and try to improve models according to the expertise."} author = {Ximeng Cheng and Ali Doosthosseini and Julian Kunkel} doi = {https://doi.org/10.3389/fpls.2022.902105} issn = {1664-462X} journal = {Frontiers in Plant Science} publisher = {Schloss Dagstuhl -- Leibniz-Zentrum für Informatik} title = {Improve the Deep Learning Models in Forestry Based on Explanations and Expertise} year = {2022} month = {05} }
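The abstract above scores explanation quality with root-mean-square error, cosine similarity, and the proportion of important pixels shared between saliency maps. As a rough illustration of what such a comparison involves (not the paper's actual evaluation code; all names and the top-fraction criterion are assumptions), a small numpy sketch:

```python
import numpy as np

def explanation_metrics(saliency, reference, top_fraction=0.1):
    """Compare a model saliency map against a reference (e.g., expert) map.

    Both inputs are 2D arrays of per-pixel importance. Returns RMSE, cosine
    similarity, and the overlap of the top `top_fraction` most important pixels.
    """
    s = saliency.ravel().astype(float)
    r = reference.ravel().astype(float)
    rmse = float(np.sqrt(np.mean((s - r) ** 2)))
    cosine = float(np.dot(s, r) / (np.linalg.norm(s) * np.linalg.norm(r) + 1e-12))
    k = max(1, int(top_fraction * s.size))
    top_s = set(np.argsort(s)[-k:])          # indices of the k most important pixels
    top_r = set(np.argsort(r)[-k:])
    overlap = len(top_s & top_r) / k
    return rmse, cosine, overlap

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    a = rng.random((8, 8))
    print(explanation_metrics(a, a + 0.05 * rng.random((8, 8))))
```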
- Predicting Stock Price Changes Based on the Limit Order Book: A Survey
(Ilia Zaznov, Julian Kunkel, Alfonso Dufour, Atta Badii),
In Mathematics,
Series: 1234,
MDPI,
ISSN: 2227-7390,
2022-04-01
URL
DOI
PDF
BIBTEX
@article{PSPCBOTLOB22 abstract = {"This survey starts with a general overview of the strategies for stock price change predictions based on market data and in particular Limit Order Book (LOB) data. The main discussion is devoted to the systematic analysis, comparison, and critical evaluation of the state-of-the-art studies in the research area of stock price movement predictions based on LOB data. LOB and Order Flow data are two of the most valuable information sources available to traders on the stock markets. Academic researchers are actively exploring the application of different quantitative methods and algorithms for this type of data to predict stock price movements. With the advancements in machine learning and subsequently in deep learning, the complexity and computational intensity of these models was growing, as well as the claimed predictive power. Some researchers claim accuracy of stock price movement prediction well in excess of 80%. These models are now commonly employed by automated market-making programs to set bids and ask quotes. If these results were also applicable to arbitrage trading strategies, then those algorithms could make a fortune for their developers. Thus, the open question is whether these results could be used to generate buy and sell signals that could be exploited with active trading. Therefore, this survey paper is intended to answer this question by reviewing these results and scrutinising their reliability. The ultimate conclusion from this analysis is that although considerable progress was achieved in this direction, even the state-of-art models can not guarantee a consistent profit in active trading. Taking this into account several suggestions for future research in this area were formulated along the three dimensions: input data, model’s architecture, and experimental setup. In particular, from the input data perspective, it is critical that the dataset is properly processed, up-to-date, and its size is sufficient for the particular model training. From the model architecture perspective, even though deep learning models are demonstrating a stronger performance than classical models, they are also more prone to over-fitting. To avoid over-fitting it is suggested to optimize the feature space, as well as a number of layers and neurons, and apply dropout functionality. The over-fitting problem can be also addressed by optimising the experimental setup in several ways: Introducing the early stopping mechanism; Saving the best weights of the model achieved during the training; Testing the model on the out-of-sample data, which should be separated from the validation and training samples. Finally, it is suggested to always conduct the trading simulation under realistic market conditions considering transactions costs, bid–ask spreads, and market impact. View Full-Text"} author = {Ilia Zaznov and Julian Kunkel and Alfonso Dufour and Atta Badii} doi = {https://doi.org/10.3390/math10081234} editor = {} issn = {2227-7390} journal = {Mathematics} publisher = {MDPI} series = {1234} title = {Predicting Stock Price Changes Based on the Limit Order Book: A Survey} url = {https://www.mdpi.com/2227-7390/10/8/1234} year = {2022} month = {04} }
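Among the survey's recommendations are early stopping and keeping the best weights observed during training. A framework-agnostic sketch of that bookkeeping is shown below; it is illustrative only and not tied to any of the surveyed models.

```python
import copy

class EarlyStopping:
    """Stop training when validation loss has not improved for `patience` epochs,
    keeping a copy of the best model state seen so far (illustrative helper)."""

    def __init__(self, patience=5, min_delta=1e-4):
        self.patience = patience
        self.min_delta = min_delta
        self.best_loss = float("inf")
        self.best_state = None
        self.bad_epochs = 0

    def step(self, val_loss, model_state):
        if val_loss < self.best_loss - self.min_delta:
            self.best_loss = val_loss
            self.best_state = copy.deepcopy(model_state)  # save best weights
            self.bad_epochs = 0
        else:
            self.bad_epochs += 1
        return self.bad_epochs >= self.patience  # True -> stop training

if __name__ == "__main__":
    stopper = EarlyStopping(patience=2)
    for epoch, loss in enumerate([0.9, 0.7, 0.71, 0.72, 0.73]):
        if stopper.step(loss, {"epoch": epoch}):
            print(f"stopped at epoch {epoch}, best loss {stopper.best_loss}")
            break
```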
- Predicting Stock Price Changes Based on the Limit Order Book: A Survey
(Ilia Zaznov, Julian Kunkel, Alfonso Dufour, Atta Badii),
2022-01-01
DOI
PDF
BIBTEX
@article{PSPCBOTLOB22 abstract = {"This survey starts with a general overview of the strategies for stock price change predictions based on market data and in particular Limit Order Book (LOB) data. The main discussion is devoted to the systematic analysis, comparison, and critical evaluation of the state-of-the-art studies in the research area of stock price movement predictions based on LOB data. LOB and Order Flow data are two of the most valuable information sources available to traders on the stock markets. Academic researchers are actively exploring the application of different quantitative methods and algorithms for this type of data to predict stock price movements. With the advancements in machine learning and subsequently in deep learning, the complexity and computational intensity of these models was growing, as well as the claimed predictive power. Some researchers claim accuracy of stock price movement prediction well in excess of 80%. These models are now commonly employed by automated market-making programs to set bids and ask quotes. If these results were also applicable to arbitrage trading strategies, then those algorithms could make a fortune for their developers. Thus, the open question is whether these results could be used to generate buy and sell signals that could be exploited with active trading. Therefore, this survey paper is intended to answer this question by reviewing these results and scrutinising their reliability. The ultimate conclusion from this analysis is that although considerable progress was achieved in this direction, even the state-of-art models can not guarantee a consistent profit in active trading. Taking this into account several suggestions for future research in this area were formulated along the three dimensions: input data, model’s architecture, and experimental setup. In particular, from the input data perspective, it is critical that the dataset is properly processed, up-to-date, and its size is sufficient for the particular model training. From the model architecture perspective, even though deep learning models are demonstrating a stronger performance than classical models, they are also more prone to over-fitting. To avoid over-fitting it is suggested to optimize the feature space, as well as a number of layers and neurons, and apply dropout functionality. The over-fitting problem can be also addressed by optimising the experimental setup in several ways: Introducing the early stopping mechanism; Saving the best weights of the model achieved during the training; Testing the model on the out-of-sample data, which should be separated from the validation and training samples. Finally, it is suggested to always conduct the trading simulation under realistic market conditions considering transactions costs, bid–ask spreads, and market impact."} author = {Ilia Zaznov and Julian Kunkel and Alfonso Dufour and Atta Badii} doi = {10.3390/math10081234} grolink = {https://resolver.sub.uni-goettingen.de/purl?gro-2/107425} title = {Predicting Stock Price Changes Based on the Limit Order Book: A Survey} year = {2022} month = {01} }
- Road Intersection Coordination Scheme for Mixed Traffic (Human-Driven and Driverless Vehicles): A Systematic Review
(Ekene F. Ozioko, Julian Kunkel, Frederic Stahl),
2022-01-01
DOI
PDF
BIBTEX
@article{RICSFMTADV22 abstract = {"Autonomous vehicles (AVs) are emerging with enormous potentials to solve many challenging road traffic problems. The AV emergence leads to a paradigm shift in the road traffic system, making the penetration of autonomous vehicles fast and its coexistence with human-driven cars inevitable. The migration from the traditional driving to the intelligent driving system with AV’s gradual deployment needs supporting technology to address mixed traffic systems problems, mixed driving behaviour in a car-following model, variation in-vehicle type control means, the impact of a proportion of AV in traffic mixed traffic, and many more. The migration to fully AV will solve many traffic problems: desire to reclaim travel and commuting time, driving comfort, and accident reduction. Motivated by the above facts, this paper presents an extensive review of road intersection mixed traffic management techniques with a classification matrix of different traffic management strategies and technologies that could effectively describe a mix of human and autonomous vehicles. It explores the existing traffic control strategies and analyses their compatibility in a mixed traffic environment. Then review their drawback and build on it for the proposed robust mix of traffic management schemes. Though many traffic control strategies have been in existence, the analysis presented in this paper gives new insights to the readers on the applications of the cell reservation strategy in a mixed traffic environment. Though many traffic control strategies have been in existence, the Gipp’s car-following model has shown to be very effective for optimal traffic flow performance."} author = {Ekene F. Ozioko and Julian Kunkel and Fredric Stahl} doi = {10.1155/2022/2951999} grolink = {https://resolver.sub.uni-goettingen.de/purl?gro-2/113814} title = {Road Intersection Coordination Scheme for Mixed Traffic (Human-Driven and Driverless Vehicles): A Systematic Review} year = {2022} month = {01} }
- User-Centric System Fault Identification Using IO500 Benchmark
(Radita Liem, Dmytro Povaliaiev, Jay Lofstead, Julian Kunkel, Christian Terboven),
pp. 35-40,
IEEE,
2021-12-01
DOI
PDF
BIBTEX
@inproceedings{USFIUIBLPL21 abstract = {"I/O performance in a multi-user environment is difficult to predict. Users do not know what I/O performance to expect when running and tuning applications. We propose to use the IO500 benchmark as a way to guide user expectations on their application’s performance and to aid identifying root causes of their I/O problems that might come from the system. Our experiments describe how we manage user expectation with IO500 and provide a mechanism for system fault identification. This work also provides us with information of the tail latency problem that needs to be addressed and granular information about the impact of I/O technique choices (POSIX and MPI-IO)."} author = {Radita Liem and Dmytro Povaliaiev and Jay Lofstead and Julian Kunkel and Christian Terboven} booktitle = {In 2021 IEEE/ACM Sixth International Parallel Data Systems Workshop (PDSW)} conference = {International Parallel Data Systems Workshop (PDSW)} doi = {https://doi.org/10.1109/PDSW54622.2021.00011} editor = {} location = {St. Louis} pages = {35-40} publisher = {IEEE} title = {User-Centric System Fault Identification Using IO500 Benchmark} year = {2021} month = {12} }
- Toward a Workflow for Identifying Jobs with Similar I/O Behavior Utilizing Time Series Analysis
(Julian Kunkel, Eugen Betke),
Series: Lecture Notes in Computer Science,
pp. 161–173,
Springer,
2021-11-01
DOI
PDF
BIBTEX
@inproceedings{TAWFIJWSIB21 abstract = {"One goal of support staff at a data center is to identify inefficient jobs and to improve their efficiency. Therefore, a data center deploys monitoring systems that capture the behavior of the executed jobs. While it is easy to utilize statistics to rank jobs based on the utilization of computing, storage, and network, it is tricky to find patterns in 100,000 jobs, i.e., is there a class of jobs that aren't performing well. Similarly, when support staff investigates a specific job in detail, e.g., because it is inefficient or highly efficient, it is relevant to identify related jobs to such a blueprint. This allows staff to understand the usage of the exhibited behavior better and to assess the optimization potential. In this article, our goal is to identify jobs similar to an arbitrary reference job. In particular, we sketch a methodology that utilizes temporal I/O similarity to identify jobs related to the reference job. Practically, we apply several previously developed time series algorithms. A study is conducted to explore the effectiveness of the approach by investigating related jobs for a reference job. The data stem from DKRZ's supercomputer Mistral and include more than 500,000 jobs that have been executed for more than 6 months of operation. Our analysis shows that the strategy and algorithms bear the potential to identify similar jobs, but more testing is necessary."} author = {Julian Kunkel and Eugen Betke} booktitle = {High Performance Computing: ISC High Performance 2021 International Workshops, Revised Selected Papers} conference = {ISC HPC} doi = {https://doi.org/10.1007/978-3-030-90539-2_10} editor = {} isbn = {978-3-030-90539-2} location = {Frankfurt, Germany} number = {12761} pages = {161–173} publisher = {Springer} series = {Lecture Notes in Computer Science} title = {Toward a Workflow for Identifying Jobs with Similar I/O Behavior Utilizing Time Series Analysis} year = {2021} month = {11} }
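The workflow above ranks jobs by temporal I/O similarity to a reference job. As a simplified illustration of one possible distance measure (not the specific algorithms evaluated in the paper), the sketch below z-normalizes two I/O time series and compares them with a Euclidean distance:

```python
import numpy as np

def znorm(x):
    """Z-normalize a 1D time series; constant series map to zeros."""
    x = np.asarray(x, dtype=float)
    std = x.std()
    return (x - x.mean()) / std if std > 0 else np.zeros_like(x)

def io_distance(reference, candidate):
    """Euclidean distance between z-normalized I/O time series of equal length.
    Smaller values indicate more similar temporal behavior (illustrative metric)."""
    return float(np.linalg.norm(znorm(reference) - znorm(candidate)))

if __name__ == "__main__":
    ref = [10, 50, 5, 60, 8, 55]   # e.g., MiB written per monitoring segment
    jobs = {"job_a": [12, 48, 6, 58, 9, 52], "job_b": [30, 30, 30, 30, 30, 30]}
    ranked = sorted(jobs, key=lambda j: io_distance(ref, jobs[j]))
    print(ranked)  # job_a ranks as more similar to the reference
```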
- Analyzing the Performance of the S3 Object Storage API for HPC Workloads
(Frank Gadban, Julian Kunkel),
In Applied Sciences,
Series: 11,
MDPI,
2021-09-14
URL
DOI
PDF
BIBTEX
@article{ATPOTSOSAF21 abstract = {| The line between HPC and Cloud is getting blurry: Performance is still the main driver in HPC, while cloud storage systems are assumed to offer low latency, high throughput, high availability, and scalability. The Simple Storage Service S3 has emerged as the de facto storage API for object storage in the Cloud. This paper seeks to check if the S3 API is already a viable alternative for HPC access patterns in terms of performance or if further performance advancements are necessary. For this purpose: (a) We extend two common HPC I/O benchmarks—the IO500 and MD-Workbench—to quantify the performance of the S3 API. We perform the analysis on the Mistral supercomputer by launching the enhanced benchmarks against different S3 implementations: on-premises (Swift, MinIO) and in the Cloud (Google, IBM. . . ). We find that these implementations do not yet meet the demanding performance and scalability expectations of HPC workloads. (b) We aim to identify the cause for the performance loss by systematically replacing parts of a popular S3 client library with lightweight replacements of lower stack components. The created S3Embedded library is highly scalable and leverages the shared cluster file systems of HPC infrastructure to accommodate arbitrary S3 client applications. Another introduced library, S3remote, uses TCP/IP for communication instead of HTTP; it provides a single local S3 gateway on each node. By broadening the scope of the IO500, this research enables the community to track the performance growth of S3 and encourage sharing best practices for performance optimization. The analysis also proves that there can be a performance convergence—at the storage level—between Cloud and HPC over time by using a high-performance S3 library like S3Embedded.} author = {Frank Gadban and Julian Kunkel} doi = {https://doi.org/10.3390/app11188540} journal = {Applied Sciences} publisher = {MDPI} series = {11} title = {Analyzing the Performance of the S3 Object Storage API for HPC Workloads} url = {https://www.mdpi.com/2076-3417/11/18/8540} year = {2021} month = {09} }
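To give a flavor of the kind of access-pattern timing that the paper builds into the IO500 and MD-Workbench, here is a minimal latency probe against an S3 endpoint using the standard boto3 client. The endpoint, credentials, and bucket name are placeholders; this is not the paper's S3Embedded or S3remote code.

```python
import time
import boto3

# Placeholder endpoint, credentials, and bucket; point these at a real
# S3-compatible service (e.g., a local MinIO instance) before running.
s3 = boto3.client(
    "s3",
    endpoint_url="http://localhost:9000",
    aws_access_key_id="ACCESS_KEY",
    aws_secret_access_key="SECRET_KEY",
)
BUCKET, KEY, PAYLOAD = "benchmark-bucket", "probe-object", b"x" * (1 << 20)  # 1 MiB

t0 = time.perf_counter()
s3.put_object(Bucket=BUCKET, Key=KEY, Body=PAYLOAD)
put_ms = (time.perf_counter() - t0) * 1000

t0 = time.perf_counter()
s3.get_object(Bucket=BUCKET, Key=KEY)["Body"].read()  # read the body to time the full GET
get_ms = (time.perf_counter() - t0) * 1000

s3.delete_object(Bucket=BUCKET, Key=KEY)
print(f"PUT {put_ms:.1f} ms, GET {get_ms:.1f} ms for a 1 MiB object")
```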
- Understanding I/O Behavior in Scientific and Data-Intensive Computing (Dagstuhl Seminar 21332)
(Philip Carns, Julian Kunkel, Kathryn Mohror, Martin Schulz),
In Dagstuhl Reports,
pp. 16-75,
Schloss Dagstuhl -- Leibniz-Zentrum für Informatik,
ISSN: 2192-5283,
2021-09-14
URL
DOI
PDF
BIBTEX
@article{UIBISADCSC21 abstract = {| Two key changes are driving an immediate need for deeper understanding of I/O workloads in high-performance computing (HPC): applications are evolving beyond the traditional bulk-synchronous models to include integrated multistep workflows, in situ analysis, artificial intelligence, and data analytics methods; and storage systems designs are evolving beyond a two-tiered file system and archive model to complex hierarchies containing temporary, fast tiers of storage close to compute resources with markedly different performance properties. Both of these changes represent a significant departure from the decades-long status quo and require investigation from storage researchers and practitioners to understand their impacts on overall I/O performance. Without an in-depth understanding of I/O workload behavior, storage system designers, I/O middleware developers, facility operators, and application developers will not know how best to design or utilize the additional tiers for optimal performance of a given I/O workload. The goal of this Dagstuhl Seminar was to bring together experts in I/O performance analysis and storage system architecture to collectively evaluate how our community is capturing and analyzing I/O workloads on HPC systems, identify any gaps in our methodologies, and determine how to develop a better in-depth understanding of their impact on HPC systems. Our discussions were lively and resulted in identifying critical needs for research in the area of understanding I/O behavior. We document those discussions in this report.} author = {Philip Carns and Julian Kunkel and Kathryn Mohror and Martin Schulz} doi = {https://doi.org/10.4230/DagRep.11.7.16} issn = {2192-5283} journal = {Dagstuhl Reports} pages = {16-75} publisher = {Schloss Dagstuhl -- Leibniz-Zentrum für Informatik} title = {Understanding I/O Behavior in Scientific and Data-Intensive Computing (Dagstuhl Seminar 21332)} url = {https://drops.dagstuhl.de/opus/volltexte/2021/15589} year = {2021} month = {09} }
Open topics for theses and projects
Topic | Professor | Type |
---|---|---|
Currently supervised theses and projects
Topic | Student | Professor | Type |
---|---|---|---|
Projects

Dr. Christian Köhler
- Email:
- christian.koehler@gwdg.de
- Tel.:
- 0551 2012193
Christian Köhler studied physics at the University of Göttingen and completed his Diplom in 2011 with the thesis "String-lokalisierte Felder und punktlokalisierte Ströme in masselosen Wigner-Darstellungen mit unendlichem Spin". In 2015 he completed his doctoral thesis "Über die Lokalisationseigenschaften von Quantenfeldern mit Nullmasse und unendlichem Spin". He joined the GWDG to work on software development in the INF ADIR project and moved to the HPC team in the course of the commissioning of the HLRN-IV system "Emmy". Since then, he has been advising SCC and HLRN users on scientific applications and works for the office of the HLRN administrative board.
Competencies:
- HLRN/NHR
- Administration
- Physical Chemistry
Publications
Open topics for theses and projects
Topic | Professor | Type |
---|---|---|
Authentication in HPC via WebAPI | Prof. Julian Kunkel | BSc, MSc |
Currently supervised theses and projects
Topic | Student | Professor | Type |
---|---|---|---|
Projects

Esteban Renato Lazo Huanqui
- Email:
- esteban.huanqui@gwdg.de
Publications
Open topics for theses and projects
Topic | Professor | Type |
---|---|---|
Currently supervised theses and projects
Topic | Student | Professor | Type |
---|---|---|---|
Projects

Mattias Luber
- Email:
- mattias.luber@gwdg.de
Publications
Open topics for theses and projects
Topic | Professor | Type |
---|---|---|
Currently supervised theses and projects
Topic | Student | Professor | Type |
---|---|---|---|
Projects

Dr. Jason Mansour
- Email:
- jason.mansour@gwdg.de
Publications
Open topics for theses and projects
Topic | Professor | Type |
---|---|---|
Currently supervised theses and projects
Topic | Student | Professor | Type |
---|---|---|---|
Projects

Tino Meisel
- Email:
- tino.meisel@gwdg.de
As part of his doctoral project at HU Berlin, Tino Meisel conducted fundamental research in optoelectronics and epitaxy on semiconductors. He has already used the HPC applications MATLAB and Wolfram Mathematica as well as the programming language Python for a data science project analyzing SARS-CoV-2 time series.
Competencies:
- HLRN/NHR
- Data Science
- Solid-State Physics
Publications
Open topics for theses and projects
Topic | Professor | Type |
---|---|---|
Currently supervised theses and projects
Topic | Student | Professor | Type |
---|---|---|---|
Projects

Marcus Merz
- Email:
- marcus.merz@gwdg.de
Over the course of his professional career, Marcus Merz has gained experience in various areas of engineering. Through his studies in computer engineering and his professional work, he has expertise at all levels of FPGA hardware design as well as firmware, driver, and Linux development for embedded systems. In addition, he has experience in building, integrating, and operating networks and the corresponding components in the medical sector, including the construction and servicing of a real-time network and control system for a particle accelerator used in cancer therapy. In all of these areas he was also responsible for the technical and administrative support of his colleagues and customers.
Competencies:
- HLRN/NHR
- Student Line Manager
Publications
Open topics for theses and projects
Topic | Professor | Type |
---|---|---|
Currently supervised theses and projects
Topic | Student | Professor | Type |
---|---|---|---|
Projects

Rosemarie Meuer
- Email:
- rosemarie.meuer@gwdg.de
- Tel.:
- 0551 3930336
Competencies:
- DLR
Publications
Open topics for theses and projects
Topic | Professor | Type |
---|---|---|
Currently supervised theses and projects
Topic | Student | Professor | Type |
---|---|---|---|
Projects

Dr. Patrick Michaelis
Patrick Michaelis studied financial and business mathematics at TU Braunschweig and obtained a doctorate in applied statistics and empirical methods at the Georg-August-Universität Göttingen. After his doctorate, he worked as a data scientist at the GEOMAR Helmholtz Centre for Ocean Research Kiel, where he applied machine learning methods to various areas of marine research. In doing so, he worked with data from a variety of sources, for example remote sensing data, sensor data, and model data. He has experience with statistical models as well as with various deep learning methods.
Competencies:
- Deep Learning
- Artificial Intelligence
Publications
Open topics for theses and projects
Topic | Professor | Type |
---|---|---|
Currently supervised theses and projects
Topic | Student | Professor | Type |
---|---|---|---|
Projects

Dr. Stefanie Mühlhausen
Stefanie Mühlhausen works as a research associate in the Computing working group (AG C). She supports the team in scientific activities and in teaching. She studied biology and applied computer science with a focus on bioinformatics at the Georg-August-Universität and obtained her doctorate at the Max Planck Institute for Biophysical Chemistry on characteristics of eukaryotic genome evolution. After her doctorate, she carried out research at the Milner Centre for Evolution in Bath, UK, and worked as a data scientist in industry. Most recently, she worked as a research associate at the Institute of Computer Science at the University of Göttingen in the spin-off project "Genometation". She has many years of experience with computing on HPC systems.
Competencies:
- Teaching
- Administration
- Bioinformatics
Publications
Open topics for theses and projects
Topic | Professor | Type |
---|---|---|
Currently supervised theses and projects
Topic | Student | Professor | Type |
---|---|---|---|
Projects

Mehmed Mustafa
- Email:
- mehmed.mustafa@gwdg.de
Publications
Open topics for theses and projects
Topic | Professor | Type |
---|---|---|
Currently supervised theses and projects
Topic | Student | Professor | Type |
---|---|---|---|
Projects

Hendrik Nolte
- Email:
- hendrick.nolte@gwdg.de
- Tel.:
- 0551 2012119
Hendrik Nolte studied physics at the University of Göttingen and completed his master's degree with the thesis "Visualisierung und Analyse multidimensionaler Photoelektronenspektroskopiedaten". He joined the GWDG in 2019 to support the general development of an in-house data lake solution.
Competencies:
- Data Lakes
Publications
- Realising Data-Centric Scientific Workflows with Provenance-Capturing on Data Lakes
(Hendrik Nolte, Philipp Wieder),
2022-01-01
DOI
BIBTEX
@article{2_121151 author = {Hendrik Nolte and Philipp Wieder} doi = {10.1162/dint_a_00141} grolink = {https://resolver.sub.uni-goettingen.de/purl?gro-2/121151} title = {Realising Data-Centric Scientific Workflows with Provenance-Capturing on Data Lakes} year = {2022} month = {01} }
- Toward data lakes as central building blocks for data management and analysis
(Philipp Wieder, Hendrik Nolte),
2022-01-01
DOI
BIBTEX
@article{2_114449 abstract = {"Data lakes are a fundamental building block for many industrial data analysis solutions and becoming increasingly popular in research. Often associated with big data use cases, data lakes are, for example, used as central data management systems of research institutions or as the core entity of machine learning pipelines. The basic underlying idea of retaining data in its native format within a data lake facilitates a large range of use cases and improves data reusability, especially when compared to the schema-on-write approach applied in data warehouses, where data is transformed prior to the actual storage to fit a predefined schema. Storing such massive amounts of raw data, however, has its very own challenges, spanning from the general data modeling, and indexing for concise querying to the integration of suitable and scalable compute capabilities. In this contribution, influential papers of the last decade have been selected to provide a comprehensive overview of developments and obtained results. The papers are analyzed with regard to the applicability of their input to data lakes that serve as central data management systems of research institutions. To achieve this, contributions to data lake architectures, metadata models, data provenance, workflow support, and FAIR principles are investigated. Last, but not least, these capabilities are mapped onto the requirements of two common research personae to identify open challenges. With that, potential research topics are determined, which have to be tackled toward the applicability of data lakes as central building blocks for research data management."} author = {Philipp Wieder and Hendrik Nolte} doi = {10.3389/fdata.2022.945720} grolink = {https://resolver.sub.uni-goettingen.de/purl?gro-2/114449} title = {Toward data lakes as central building blocks for data management and analysis} year = {2022} month = {01} }
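The abstract contrasts the schema-on-write approach of data warehouses with the data lake idea of keeping raw data in its native format and attaching metadata for later interpretation. A toy sketch of such a schema-on-read catalog record follows; it is purely illustrative and not the metadata model discussed in the paper.

```python
import hashlib
import json
from datetime import datetime, timezone

def register_raw_file(path: str, annotations: dict) -> dict:
    """Create a minimal catalog record for a file ingested 'as is' into a data lake.
    The record keeps provenance-style facts (checksum, ingest time) plus free-form
    annotations that a later schema-on-read step can interpret. Illustrative only."""
    with open(path, "rb") as fh:
        digest = hashlib.sha256(fh.read()).hexdigest()
    return {
        "path": path,
        "sha256": digest,
        "ingested_at": datetime.now(timezone.utc).isoformat(),
        "annotations": annotations,  # e.g., instrument, project, data format
    }

if __name__ == "__main__":
    record = register_raw_file(__file__, {"project": "demo", "format": "python-source"})
    print(json.dumps(record, indent=2))
```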
Open topics for theses and projects
Topic | Professor | Type |
---|---|---|
Development of a provenance-aware ad-hoc interface for a data lake | Prof. Julian Kunkel | BSc, MSc |
Semantic classification of metadata attributes in a data lake using machine learning | Prof. Julian Kunkel | BSc, MSc |
Governance for a data lake | Prof. Julian Kunkel | BSc, MSc |
Currently supervised theses and projects
Topic | Student | Professor | Type |
---|---|---|---|
Projects

Dr. Jack Ogaja
- Email:
- jack.ogaja@gwdg.de
- Tel.:
- 0551 3930118
Competencies:
- HLRN/NHR
- Fluid Dynamics
- Earth System Science
Publications
Open topics for theses and projects
Topic | Professor | Type |
---|---|---|
Parallelization of iterative optimization algorithms for image processing with MPI | Prof. Julian Kunkel | BSc, MSc |
Currently supervised theses and projects
Topic | Student | Professor | Type |
---|---|---|---|
Projects
Dr. Martin Leandro Paleico
- Email:
- martin.paleico@gwdg.de
Martin Leandro Paleico looks after various aspects of the GWDG's bioinformatics services. Dr. Paleico studied chemistry at the University of Buenos Aires and obtained his doctorate in computational and theoretical chemistry at the University of Göttingen in 2021 with the thesis "Neural Network Potential Simulations of Copper Supported on Zinc Oxide Surfaces". His interests lie in chemistry, biology, programming, machine learning, and system administration.
Competencies:
- Machine Learning
Publications
Open topics for theses and projects
Topic | Professor | Type |
---|---|---|
Currently supervised theses and projects
Topic | Student | Professor | Type |
---|---|---|---|
Projects

Lars Quentin
- Email:
- lars.quentin@gwdg.de
Publications
Open topics for theses and projects
Topic | Professor | Type |
---|---|---|
Currently supervised theses and projects
Topic | Student | Professor | Type |
---|---|---|---|
Projects

Stina Riegelmann
- Email:
- stina.riegelmann@gwdg.de
Publications
Open topics for theses and projects
Topic | Professor | Type |
---|---|---|
Currently supervised theses and projects
Topic | Student | Professor | Type |
---|---|---|---|
Projects

Jonas Adrian Rieling
- Email:
- jonas.rieling@gwdg.de
Publications
Open topics for theses and projects
Topic | Professor | Type |
---|---|---|
Currently supervised theses and projects
Topic | Student | Professor | Type |
---|---|---|---|
Projects

Steffen Rörtgen
- Email:
- steffen.roertgen@gwdg.de
- Tel.:
- 0551 3930262
Publications
Open topics for theses and projects
Topic | Professor | Type |
---|---|---|
Currently supervised theses and projects
Topic | Student | Professor | Type |
---|---|---|---|
Projects

Bernd Schlör
- Email:
- bernd.schloer@gwdg.de
- Tel.:
- 0551 3930279
Publications
Open topics for theses and projects
Topic | Professor | Type |
---|---|---|
Currently supervised theses and projects
Topic | Student | Professor | Type |
---|---|---|---|
Projects
Urs Schoepflin
- Email:
- urs.schoepflin@gwdg.de
- Tel.:
- 0151 52226383
Publications
Open topics for theses and projects
Topic | Professor | Type |
---|---|---|
Currently supervised theses and projects
Topic | Student | Professor | Type |
---|---|---|---|
Projects

Jonas Schrewe
- Email:
- jonas.schrewe@gwdg.de
Publications
Open topics for theses and projects
Topic | Professor | Type |
---|---|---|
Currently supervised theses and projects
Topic | Student | Professor | Type |
---|---|---|---|
Projects

Pavan Kumar Siligam
- Email:
- pavan.siligam@gwdg.de
Pavan Siligam obtained his master's degree in Integrated Climate System Sciences at the University of Hamburg in 2012 with his thesis "Algorithm to detect lead in seaice using CRYOSAT-2 Level-1B data". At DKRZ he worked in the HDCP2 project, focusing on post-processing of model output data and on user support. He is currently working on the development of EsiWACE-ESDM and is also involved in user support for the Earth System Science community within the HLRN.
Competencies:
- HLRN/NHR
- Earth System Science
Publications
Open topics for theses and projects
Topic | Professor | Type |
---|---|---|
Currently supervised theses and projects
Topic | Student | Professor | Type |
---|---|---|---|
Projects

Dorothea Sommer
- Email:
- dorothea.sommer@gwdg.de
Dorothea Sommer studied computer science at the University of Göttingen and computational neuroscience at the Bernstein Center for Computational Neuroscience in Berlin. She wrote her master's thesis on meta-reinforcement learning and loves applying machine learning to relevant real-world problems. At the GWDG, Dorothea works as a data scientist for the Forestcare project.
Competencies:
- ForestCare
- Machine Learning
Publications
Open topics for theses and projects
Topic | Professor | Type |
---|---|---|
Currently supervised theses and projects
Topic | Student | Professor | Type |
---|---|---|---|
Projects

Lena Steilen
- Email:
- lena.steilen@gwdg.de
- Tel.:
- 0551 3930271
Publications
- Enhanced Research for the Göttingen Campus
(Jens Dierkes, Timo Gnadt, Fabian Cremer, Péter Király, Christopher Menke, Oliver Wannenwetsch, Lena Steilen, Ulrike Wuttke, Wolfram Horstmann, Ramin Yahyapour),
2015-01-01
BIBTEX
@inproceedings{2_57543 author = {Jens Dierkes and Timo Gnadt and Fabian Cremer and Péter Király and Christopher Menke and Oliver Wannenwetsch and Lena Steilen and Ulrike Wuttke and Wolfram Horstmann and Ramin Yahyapour} grolink = {https://resolver.sub.uni-goettingen.de/purl?gro-2/57543} title = {Enhanced Research for the Göttingen Campus} year = {2015} month = {01} }
Open topics for theses and projects
Topic | Professor | Type |
---|---|---|
Currently supervised theses and projects
Topic | Student | Professor | Type |
---|---|---|---|
Projects

Trevor Khwam Tabougua
Trevor began his studies in Tunisia at the University of Gabès, where he obtained his bachelor's degree in mathematics. He continued his mathematics studies at the University of Göttingen with a focus on stochastics, which he successfully completed with a master's degree. During his studies, he worked as a student assistant at Fraunhofer IEE and at the Institute for Mathematical Stochastics (IMS) in Göttingen, where he gained experience in various areas such as data science, machine learning, and Python programming.
Competencies:
- Secure Workflow
- Machine Learning
Publications
Open topics for theses and projects
Topic | Professor | Type |
---|---|---|
Currently supervised theses and projects
Topic | Student | Professor | Type |
---|---|---|---|
Projects

Timon Vogt
- Email:
- timon.vogt@gwdg.de
Mr. Vogt obtained his Master of Science degree in Applied Computer Science at the University of Göttingen. In the course of his master's thesis "Enhancing the Visual Hull Method" in the field of computer vision, he gained substantial experience in high-performance computing, in particular with GPUs. During his studies at the University of Göttingen he was also employed as a student assistant at the university, where he was entrusted with system administration and software development tasks. Mr. Vogt has many years of experience in software development and administration, both from his studies and from personal projects.
Competencies:
- HLRN/NHR
- DLR
- Administration
- Container
- Slurm
- Monitoring
Publications
Open topics for theses and projects
Topic | Professor | Type |
---|---|---|
Currently supervised theses and projects
Topic | Student | Professor | Type |
---|---|---|---|
Projects

Dr. Artur Wachtel
- Email:
- artur.wachtel@gwdg.de
Artur Wachtel works as a research associate in the "Computing" working group (AG C). There he supports the team operating the new DLR supercomputer "CARO". After studying physics at the University of Göttingen, Mr. Wachtel obtained his doctorate in physics at the University of Luxembourg on the thermodynamics of chemical reaction networks. In the following years he conducted research in statistical physics and theoretical biophysics at the Universities of Luxembourg and Yale. Mr. Wachtel also has many years of experience in Linux system administration and, since his doctorate, experience with computing on HPC systems.
Competencies:
- DLR
Publications
Open topics for theses and projects
Topic | Professor | Type |
---|---|---|
Currently supervised theses and projects
Topic | Student | Professor | Type |
---|---|---|---|
Projects

Prof. Dr. Philipp Wieder
Head of the working group
Deputy Head of the GWDG
- Email:
- philipp.wieder@gwdg.de
- Tel.:
- 0551 3930104
Publications
- Canonical Workflow for Experimental Research
(Dirk Betz, Claudia Biniossek, Christophe Blanchi, Felix Henninger, Thomas Lauer, Philipp Wieder, Peter Wittenburg, Martin Zünkeler),
2022-01-01
DOI
BIBTEX
@article{2_121152 author = {Dirk Betz and Claudia Biniossek and Christophe Blanchi and Felix Henninger and Thomas Lauer and Philipp Wieder and Peter Wittenburg and Martin Zünkeler} doi = {10.1162/dint_a_00123} grolink = {https://resolver.sub.uni-goettingen.de/purl?gro-2/121152} title = {Canonical Workflow for Experimental Research} year = {2022} month = {01} }
- Realising Data-Centric Scientific Workflows with Provenance-Capturing on Data Lakes
(Hendrik Nolte, Philipp Wieder),
2022-01-01
DOI
BIBTEX
@article{2_121151 author = {Hendrik Nolte and Philipp Wieder} doi = {10.1162/dint_a_00141} grolink = {https://resolver.sub.uni-goettingen.de/purl?gro-2/121151} title = {Realising Data-Centric Scientific Workflows with Provenance-Capturing on Data Lakes} year = {2022} month = {01} }
- Toward data lakes as central building blocks for data management and analysis
(Philipp Wieder, Hendrik Nolte),
2022-01-01
DOI
BIBTEX
@article{2_114449 abstract = {"Data lakes are a fundamental building block for many industrial data analysis solutions and becoming increasingly popular in research. Often associated with big data use cases, data lakes are, for example, used as central data management systems of research institutions or as the core entity of machine learning pipelines. The basic underlying idea of retaining data in its native format within a data lake facilitates a large range of use cases and improves data reusability, especially when compared to the schema-on-write approach applied in data warehouses, where data is transformed prior to the actual storage to fit a predefined schema. Storing such massive amounts of raw data, however, has its very own challenges, spanning from the general data modeling, and indexing for concise querying to the integration of suitable and scalable compute capabilities. In this contribution, influential papers of the last decade have been selected to provide a comprehensive overview of developments and obtained results. The papers are analyzed with regard to the applicability of their input to data lakes that serve as central data management systems of research institutions. To achieve this, contributions to data lake architectures, metadata models, data provenance, workflow support, and FAIR principles are investigated. Last, but not least, these capabilities are mapped onto the requirements of two common research personae to identify open challenges. With that, potential research topics are determined, which have to be tackled toward the applicability of data lakes as central building blocks for research data management."} author = {Philipp Wieder and Hendrik Nolte} doi = {10.3389/fdata.2022.945720} grolink = {https://resolver.sub.uni-goettingen.de/purl?gro-2/114449} title = {Toward data lakes as central building blocks for data management and analysis} year = {2022} month = {01} }
- Sekundäre Nutzung von hausärztlichen Routinedaten ist machbar – Bericht vom RADAR Projekt
(Johannes Hauswaldt, Thomas Bahls, Arne Blumentritt, Iris Demmer, Johannes Drepper, Roland Groh, Stephanie Heinemann, Wolfgang Hoffmann, Valérie Kempter, Johannes Pung, Otto Rienhoff, Falk Schlegelmilch, Philipp Wieder, Ramin Yahyapour, Eva Hummers),
2021-01-01
DOI
BIBTEX
@article{2_97749 abstract = {"Zusammenfassung Ziel der Studie „Real world“-Daten aus der ambulanten Gesundheitsversorgung sind in Deutschland nur schwer systematisch und longitudinal zu erlangen. Unsere Vision ist eine permanente Datenablage mit repräsentativen, de-identifizierten Patienten- und Versorgungsdaten, längsschnittlich, fortwährend aktualisiert und von verschiedenen Versorgern, mit der Möglichkeit zur Verknüpfung mit weiteren Daten, etwa aus Patientenbefragungen oder biologischer Forschung, zugänglich für andere Forscher. Wir berichten methodische Vorgehensweisen und Ergebnisse aus dem RADAR Projekt.Methodik Untersuchung des Rechtsrahmens, Entwicklung prototypischer technischer Abläufe und Lösungen, mit Machbarkeitsstudie zur Evaluation von technischer und inhaltlicher Funktionalität sowie Eignung für Fragestellungen der Versorgungsforschung.Ergebnisse Ab 2016 entwickelte ein interdisziplinäres Wissenschaftlerteam ein Datenschutzkonzept für Exporte von Versorgungsdaten aus elektronischen Praxisverwaltungssystemen. Eine technische und organisatorische Forschungsinfrastruktur im ambulanten Sektor wurden entwickelt und im Anwendungsfall „Orale Antikoagulation“ (OAK) umgesetzt. In 7 niedersächsischen Hausarztpraxen wurden 100 Patienten gewonnen und nach informierter Einwilligung ihre ausgewählten Behandlungsdaten, reduziert auf 40 relevante Datenfelder, über die Behandlungsdatentransfer-Schnittstelle extrahiert, unmittelbar vor Ort in identifizierende bzw. medizinische Daten getrennt und verschlüsselt zur Treuhandstelle (THS) bzw. an den Datenhalter übertragen. 75 Patienten, die die Einschlusskriterien erfüllten (mind. 1 Jahr Behandlung mit OAK), erhielten einen Lebensqualitäts-Fragebogen über die THS per Post. Von 66 Rücksendungen wurden 63 Fragebogenergebnisse mit den Behandlungsdaten in der Datenablage verknüpft.Schlussfolgerung Die rechtskonforme Machbarkeit der Gewinnung von pseudonymisierten hausärztlichen Routinedaten mit expliziter informierter Patienteneinwilligung und deren wissenschaftliche Nutzung einschließlich Re-Kontaktierung und Einbindung von Fragebogendaten konnte nachgewiesen werden. Die Schutzkonzepte Privacy by design und Datenminimierung (Artikel 25 mit Erwägungsgrund 78 DSGVO) wurden systematisch in das RADAR Projekt integriert und begründen wesentlich, dass der Machbarkeitsnachweis rechtskonformer Primärdatengewinnung und sekundärer Nutzung für Forschungszwecke gelang. Eine Nutzung hinreichend anonymisierter, aber noch sinnvoller hausärztlicher Gesundheitsdaten ohne individuelle Einwilligung ist im bestehenden Rechtsrahmen in Deutschland schwerlich umsetzbar."} author = {Johannes Hauswaldt and Thomas Bahls and Arne Blumentritt and Iris Demmer and Johannes Drepper and Roland Groh and Stephanie Heinemann and Wolfgang Hoffmann and Valérie Kempter and Johannes Pung and Otto Rienhoff and Falk Schlegelmilch and Philipp Wieder and Ramin Yahyapour and Eva Hummers} doi = {10.1055/a-1676-4020} grolink = {https://resolver.sub.uni-goettingen.de/purl?gro-2/97749} title = {Sekundäre Nutzung von hausärztlichen Routinedaten ist machbar – Bericht vom RADAR Projekt} year = {2021} month = {01} }
- Certification Schemes for Research Infrastructures
(Felix Helfer, Stefan Buddenbohm, Thomas Eckart, Philipp Wieder),
2021-01-01
BIBTEX
@misc{2_108259 abstract = {"This working paper discusses the use and importance of various certification systems for the field of modern research infrastructures. For infrastructures such as CLARIAH-DE, reliable storage, management and dissemination of research data is an essential task. The certification of various areas, such as the technical architecture used, the work processes used or the qualification level of the staff, is an established procedure to ensure compliance with a variety of standards and quality criteria and to demonstrate the quality and reliability of an infrastructure to researchers, funders and comparable consortia. The working paper conducts this discussion based on an overview of selected certification systems that are of particular importance for CLARIAH-DE, but also for other research infrastructures. In addition to formalised certifications, the paper also addresses the areas of software-specific and self-assessment-based procedures and the different roles of the actors involved."} address = {Göttingen} author = {Felix Helfer and Stefan Buddenbohm and Thomas Eckart and Philipp Wieder} grolink = {https://resolver.sub.uni-goettingen.de/purl?gro-2/108259} title = {Certification Schemes for Research Infrastructures} year = {2021} month = {01} }
- An Optimized Single Sign-On Schema for Reliable Multi -Level Security Management in Clouds
(Aytaj Badirova, Shirin Dabbaghi, Faraz Fatemi-Moghaddam, Philipp Wieder, Ramin Yahyapour),
In Proceedings of FiCloud 2021 – 8th International Conference on Future Internet of Things and Cloud,
2021-01-01
DOI
BIBTEX
@inproceedings{2_121153 author = {Aytaj Badirova and Shirin Dabbaghi and Faraz Fatemi-Moghaddam and Philipp Wieder and Ramin Yahyapour} doi = {10.1109/FiCloud49777.2021.00014} grolink = {https://resolver.sub.uni-goettingen.de/purl?gro-2/121153} journal = {Proceedings of FiCloud 2021 – 8th International Conference on Future Internet of Things and Cloud} title = {An Optimized Single Sign-On Schema for Reliable Multi -Level Security Management in Clouds} year = {2021} month = {01} }
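As background to token-based authentication schemes such as the one above, the sketch below shows a generic HMAC-signed token being issued and verified with the Python standard library. It is a generic illustration of signed tokens only, not the multi-level single sign-on schema proposed in the paper; the secret, claim names, and lifetime are assumptions.

```python
import base64
import hashlib
import hmac
import json
import time

SECRET = b"server-side-secret"  # placeholder key held by the issuing service

def issue_token(subject: str, ttl_seconds: int = 300) -> str:
    """Issue a signed token with a subject and an expiry claim."""
    payload = json.dumps({"sub": subject, "exp": time.time() + ttl_seconds}).encode()
    signature = hmac.new(SECRET, payload, hashlib.sha256).digest()
    return (base64.urlsafe_b64encode(payload).decode() + "."
            + base64.urlsafe_b64encode(signature).decode())

def verify_token(token: str):
    """Return the claims if the signature is valid and the token is not expired."""
    payload_b64, sig_b64 = token.split(".")
    payload = base64.urlsafe_b64decode(payload_b64)
    expected = hmac.new(SECRET, payload, hashlib.sha256).digest()
    if not hmac.compare_digest(expected, base64.urlsafe_b64decode(sig_b64)):
        return None  # signature mismatch
    claims = json.loads(payload)
    return claims if claims["exp"] > time.time() else None  # reject expired tokens

if __name__ == "__main__":
    token = issue_token("alice")
    print(verify_token(token))
```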
- OCR-D kompakt: Ergebnisse und Stand der Forschung in der Förderinitiative
(Konstantin Baierer, Matthias Boenig, Elisabeth Engl, Clemens Neudecker, Reinhard Altenhöner, Alexander Geyken, Johannes Mangei, Rainer Stotzka, Andreas Dengel, Martin Jenckel, Alexander Gehrke, Frank Puppe, Stefan Weil, Robert Sachunsky, Lena K. Schiffer, Maciej Janicki, Gerhard Heyer, Florian Fink, Klaus U. Schulz, Nikolaus Weichselbaumer, Saskia Limbach, Mathias Seuret, Rui Dong, Manuel Burghardt, Vincent Christlein, Triet Ho Anh Doan, Zeki Mustafa Dogan, Jörg-Holger Panzer, Kristine Schima-Voigt, Philipp Wieder),
2020-01-01
URL
DOI
BIBTEX
@misc{2_121682 abstract = {"Bereits seit einigen Jahren werden große Anstrengungen unternommen, um die im deutschen Sprachraum erschienenen Drucke des 16.-18. Jahrhunderts zu erfassen und zu digitalisieren. Deren Volltexttransformation konzeptionell und technisch vorzubereiten, ist das übergeordnete Ziel des DFG-Projekts OCR-D, das sich mit der Weiterentwicklung von Verfahren der Optical Character Recognition befasst. Der Beitrag beschreibt den aktuellen Entwicklungsstand der OCR-D-Software und analysiert deren erste Teststellung in ausgewählten Bibliotheken."} author = {Konstantin Baierer and Matthias Boenig and Elisabeth Engl and Clemens Neudecker and Reinhard Altenhöner and Alexander Geyken and Johannes Mangei and Rainer Stotzka and Andreas Dengel and Martin Jenckel and Alexander Gehrke and Frank Puppe and Stefan Weil and Robert Sachunsky and Lena K. Schiffer and Maciej Janicki and Gerhard Heyer and Florian Fink and Klaus U. Schulz and Nikolaus Weichselbaumer and Saskia Limbach and Mathias Seuret and Rui Dong and Manuel Burghardt and Vincent Christlein and Triet Ho Anh Doan and Zeki Mustafa Dogan and Jörg-Holger Panzer and Kristine Schima-Voigt and Philipp Wieder} doi = {10.18452/21548} grolink = {https://resolver.sub.uni-goettingen.de/purl?gro-2/121682} title = {OCR-D kompakt: Ergebnisse und Stand der Forschung in der Förderinitiative} url = {https://publications.goettingen-research-online.de/handle/2/116509} year = {2020} month = {01} }
- OLA-HD – Ein OCR-D-Langzeitarchiv für historische Drucke
(Triet Ho Anh Doan, Zeki Mustafa Doğan, Jörg-Holger Panzer, Kristine Schima-Voigt, Philipp Wieder),
2020-01-01
DOI
BIBTEX
@article{2_116509 author = {Triet Ho Anh Doan and Zeki Mustafa Doğan and Jörg-Holger Panzer and Kristine Schima-Voigt and Philipp Wieder} doi = {10.18452/21548} grolink = {https://resolver.sub.uni-goettingen.de/purl?gro-2/116509} title = {OLA-HD – Ein OCR-D-Langzeitarchiv für historische Drucke} year = {2020} month = {01} }
- menoci: Lightweight Extensible Web Portal enabling FAIR Data Management for Biomedical Research Projects
(Markus Suhr, Christoph Lehmann, Christian Robert Bauer, Theresa Bender, Cornelius Knopp, Luca Freckmann, Björn Öst Hansen, Christian Henke, Georg Aschenbrandt, Lea Katharina Kühlborn, Sophia Rheinländer, Linus Weber, Bartlomiej Marzec, Marcel Hellkamp, Philipp Wieder, Harald Kusch, Ulrich Sax, Sara Yasemin Nussbeck),
2020-01-01
URL
BIBTEX
@misc{2_63412 abstract = {"Background: Biomedical research projects deal with data management requirements from multiple sources like funding agencies' guidelines, publisher policies, discipline best practices, and their own users' needs. We describe functional and quality requirements based on many years of experience implementing data management for the CRC 1002 and CRC 1190. A fully equipped data management software should improve documentation of experiments and materials, enable data storage and sharing according to the FAIR Guiding Principles while maximizing usability, information security, as well as software sustainability and reusability. Results: We introduce the modular web portal software menoci for data collection, experiment documentation, data publication, sharing, and preservation in biomedical research projects. Menoci modules are based on the Drupal content management system which enables lightweight deployment and setup, and creates the possibility to combine research data management with a customisable project home page or collaboration platform. Conclusions: Management of research data and digital research artefacts is transforming from individual researcher or groups best practices towards project- or organisation-wide service infrastructures. To enable and support this structural transformation process, a vital ecosystem of open source software tools is needed. Menoci is a contribution to this ecosystem of research data management tools that is specifically designed to support biomedical research projects."} author = {Markus Suhr and Christoph Lehmann and Christian Robert Bauer and Theresa Bender and Cornelius Knopp and Luca Freckmann and Björn Öst Hansen and Christian Henke and Georg Aschenbrandt and Lea Katharina Kühlborn and Sophia Rheinländer and Linus Weber and Bartlomiej Marzec and Marcel Hellkamp and Philipp Wieder and Harald Kusch and Ulrich Sax and Sara Yasemin Nussbeck} grolink = {https://resolver.sub.uni-goettingen.de/purl?gro-2/63412} title = {menoci: Lightweight Extensible Web Portal enabling FAIR Data Management for Biomedical Research Projects} url = {https://sfb1190.med.uni-goettingen.de/production/literature/publications/106} year = {2020} month = {01} }
- Designing and piloting a generic research architecture and workflows to unlock German primary care data for secondary use
(Thomas Bahls, Johannes Pung, Stephanie Heinemann, Johannes Hauswaldt, Iris Demmer, Arne Blumentritt, Henriette Rau, Johannes Drepper, Philipp Wieder, Roland Groh, Eva Hummers, Falk Schlegelmilch),
2020-01-01
DOI
BIBTEX
@article{2_68099 abstract = {"Medical data from family doctors are of great importance to health care researchers but seem to be locked in German practices and, thus, are underused in research. The RADAR project (Routine Anonymized Data for Advanced Health Services Research) aims at designing, implementing and piloting a generic research architecture, technical software solutions as well as procedures and workflows to unlock data from family doctor's practices. A long-term medical data repository for research taking legal requirements into account is established. Thereby, RADAR helps closing the gap between the European countries and to contribute data from primary care in Germany."} author = {Thomas Bahls and Johannes Pung and Stephanie Heinemann and Johannes Hauswaldt and Iris Demmer and Arne Blumentritt and Henriette Rau and Johannes Drepper and Philipp Wieder and Roland Groh and Eva Hummers and Falk Schlegelmilch} doi = {10.1186/s12967-020-02547-x} grolink = {https://resolver.sub.uni-goettingen.de/purl?gro-2/68099} title = {Designing and piloting a generic research architecture and workflows to unlock German primary care data for secondary use} year = {2020} month = {01} }
Open topics for theses and projects
Topic | Professor | Type |
---|---|---|
Currently supervised theses and projects
Topic | Student | Professor | Type |
---|---|---|---|
Projects

Alexander Wildschütz
- Email:
- alexander.wildschuetz@gwdg.de
- Tel.:
- 0551 3930270
Publications
Open topics for theses and projects
Topic | Professor | Type |
---|---|---|
Currently supervised theses and projects
Topic | Student | Professor | Type |
---|---|---|---|
Projects

Since October 2011, Professor Dr. Ramin Yahyapour has been Managing Director of the GWDG, a joint institution of the Georg-August-Universität Göttingen and the Max Planck Society. At the same time, he was appointed full professor of practical computer science at the Georg-August-Universität Göttingen. Previously, he was a professor at TU Dortmund University as well as director of the IT and media center and CIO of that university. Professor Yahyapour holds a doctorate in electrical engineering. His research focuses on resource management and its application in service-oriented infrastructures, cloud computing, and data management. Professor Yahyapour has been and is actively involved in numerous national and international research projects, for example as scientific coordinator of the SLA@SOI project, funded by the Seventh Framework Programme of the European Union, and as a member of the executive committee of the CoreGRID Network of Excellence. Professor Yahyapour regularly acts as a reviewer for funding agencies and as a consultant for IT organizations. He is also an organizer of and member of program committees for various conferences and workshops, as well as a reviewer for a large number of scientific publications.
Publications
- Recent Advances of Resource Allocation in Network Function Virtualization
(Song Yang, Fan Li, Stojan Trajanovski, Ramin Yahyapour, Xiaoming Fu),
2021-01-01
DOI
BIBTEX
@article{2_68120 abstract = {"Network Function Virtualization (NFV) has been emerging as an appealing solution that transforms complex network functions from dedicated hardware implementations to software instances running in a virtualized environment. Due to the numerous advantages such as flexibility, efficiency, scalability, short deployment cycles, and service upgrade, NFV has been widely recognized as the next-generation network service provisioning paradigm. In NFV, the requested service is implemented by a sequence of Virtual Network Functions (VNF) that can run on generic servers by leveraging the virtualization technology. These VNFs are pitched with a predefined order through which data flows traverse, and it is also known as the Service Function Chaining (SFC). In this article, we provide an overview of recent advances of resource allocation in NFV. We generalize and analyze four representative resource allocation problems, namely, (1) the VNF Placement and Traffic Routing problem, (2) VNF Placement problem, (3) Traffic Routing problem in NFV, and (4) the VNF Redeployment and Consolidation problem. After that, we study the delay calculation models and VNF protection (availability) models in NFV resource allocation, which are two important Quality of Service (QoS) parameters. Subsequently, we classify and summarize the representative work for solving the generalized problems by considering various QoS parameters (e.g., cost, delay, reliability, and energy) and different scenarios (e.g., edge cloud, online provisioning, and distributed provisioning). Finally, we conclude our article with a short discussion on the state-of-the-art and emerging topics in the related fields, and highlight areas where we expect high potential for future research."} author = {Song Yang and Fan Li and Stojan Trajanovski and Ramin Yahyapour and Xiaoming Fu} doi = {10.1109/TPDS.2020.3017001} grolink = {https://resolver.sub.uni-goettingen.de/purl?gro-2/68120} title = {Recent Advances of Resource Allocation in Network Function Virtualization} year = {2021} month = {01} }
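The survey above covers VNF placement and service function chaining. As a toy illustration of the placement subproblem (a greedy first-fit heuristic, not one of the algorithms surveyed):

```python
def place_chain(chain, servers):
    """Greedy first-fit placement of a service function chain.

    chain   : list of (vnf_name, cpu_demand) in traversal order
    servers : dict server_name -> free CPU capacity (modified in place)
    Returns a mapping vnf_name -> server_name, or None if placement fails.
    Illustrative heuristic only; real NFV placement also models latency,
    bandwidth, and availability constraints.
    """
    placement = {}
    for vnf, demand in chain:
        for server, free in servers.items():
            if free >= demand:
                servers[server] = free - demand
                placement[vnf] = server
                break
        else:
            return None  # no server can host this VNF
    return placement

if __name__ == "__main__":
    sfc = [("firewall", 2), ("nat", 1), ("ids", 4)]
    hosts = {"edge-1": 4, "edge-2": 4, "core-1": 8}
    print(place_chain(sfc, hosts))  # firewall->edge-1, nat->edge-1, ids->edge-2
```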
- Sekundäre Nutzung von hausärztlichen Routinedaten ist machbar – Bericht vom RADAR Projekt
(Johannes Hauswaldt, Thomas Bahls, Arne Blumentritt, Iris Demmer, Johannes Drepper, Roland Groh, Stephanie Heinemann, Wolfgang Hoffmann, Valérie Kempter, Johannes Pung, Otto Rienhoff, Falk Schlegelmilch, Philipp Wieder, Ramin Yahyapour, Eva Hummers),
2021-01-01
DOI
BIBTEX
@article{2_97749 abstract = {"Zusammenfassung Ziel der Studie „Real world“-Daten aus der ambulanten Gesundheitsversorgung sind in Deutschland nur schwer systematisch und longitudinal zu erlangen. Unsere Vision ist eine permanente Datenablage mit repräsentativen, de-identifizierten Patienten- und Versorgungsdaten, längsschnittlich, fortwährend aktualisiert und von verschiedenen Versorgern, mit der Möglichkeit zur Verknüpfung mit weiteren Daten, etwa aus Patientenbefragungen oder biologischer Forschung, zugänglich für andere Forscher. Wir berichten methodische Vorgehensweisen und Ergebnisse aus dem RADAR Projekt.Methodik Untersuchung des Rechtsrahmens, Entwicklung prototypischer technischer Abläufe und Lösungen, mit Machbarkeitsstudie zur Evaluation von technischer und inhaltlicher Funktionalität sowie Eignung für Fragestellungen der Versorgungsforschung.Ergebnisse Ab 2016 entwickelte ein interdisziplinäres Wissenschaftlerteam ein Datenschutzkonzept für Exporte von Versorgungsdaten aus elektronischen Praxisverwaltungssystemen. Eine technische und organisatorische Forschungsinfrastruktur im ambulanten Sektor wurden entwickelt und im Anwendungsfall „Orale Antikoagulation“ (OAK) umgesetzt. In 7 niedersächsischen Hausarztpraxen wurden 100 Patienten gewonnen und nach informierter Einwilligung ihre ausgewählten Behandlungsdaten, reduziert auf 40 relevante Datenfelder, über die Behandlungsdatentransfer-Schnittstelle extrahiert, unmittelbar vor Ort in identifizierende bzw. medizinische Daten getrennt und verschlüsselt zur Treuhandstelle (THS) bzw. an den Datenhalter übertragen. 75 Patienten, die die Einschlusskriterien erfüllten (mind. 1 Jahr Behandlung mit OAK), erhielten einen Lebensqualitäts-Fragebogen über die THS per Post. Von 66 Rücksendungen wurden 63 Fragebogenergebnisse mit den Behandlungsdaten in der Datenablage verknüpft.Schlussfolgerung Die rechtskonforme Machbarkeit der Gewinnung von pseudonymisierten hausärztlichen Routinedaten mit expliziter informierter Patienteneinwilligung und deren wissenschaftliche Nutzung einschließlich Re-Kontaktierung und Einbindung von Fragebogendaten konnte nachgewiesen werden. Die Schutzkonzepte Privacy by design und Datenminimierung (Artikel 25 mit Erwägungsgrund 78 DSGVO) wurden systematisch in das RADAR Projekt integriert und begründen wesentlich, dass der Machbarkeitsnachweis rechtskonformer Primärdatengewinnung und sekundärer Nutzung für Forschungszwecke gelang. Eine Nutzung hinreichend anonymisierter, aber noch sinnvoller hausärztlicher Gesundheitsdaten ohne individuelle Einwilligung ist im bestehenden Rechtsrahmen in Deutschland schwerlich umsetzbar."} author = {Johannes Hauswaldt and Thomas Bahls and Arne Blumentritt and Iris Demmer and Johannes Drepper and Roland Groh and Stephanie Heinemann and Wolfgang Hoffmann and Valérie Kempter and Johannes Pung and Otto Rienhoff and Falk Schlegelmilch and Philipp Wieder and Ramin Yahyapour and Eva Hummers} doi = {10.1055/a-1676-4020} grolink = {https://resolver.sub.uni-goettingen.de/purl?gro-2/97749} title = {Sekundäre Nutzung von hausärztlichen Routinedaten ist machbar – Bericht vom RADAR Projekt} year = {2021} month = {01} }
- An Optimized Single Sign-On Schema for Reliable Multi-Level Security Management in Clouds
(Aytaj Badirova, Shirin Dabbaghi, Faraz Fatemi-Moghaddam, Philipp Wieder, Ramin Yahyapour),
In Proceedings of FiCloud 2021 – 8th International Conference on Future Internet of Things and Cloud,
2021-01-01
DOI
BIBTEX
@inproceedings{2_121153 author = {Aytaj Badirova and Shirin Dabbaghi and Faraz Fatemi-Moghaddam and Philipp Wieder and Ramin Yahyapour} doi = {10.1109/FiCloud49777.2021.00014} grolink = {https://resolver.sub.uni-goettingen.de/purl?gro-2/121153} journal = {Proceedings of FiCloud 2021 – 8th International Conference on Future Internet of Things and Cloud} title = {An Optimized Single Sign-On Schema for Reliable Multi-Level Security Management in Clouds} year = {2021} month = {01} }
- A two-phase virtual machine placement policy for data-intensive applications in cloud
(Samaneh Sadegh, Kamran Zamanifar, Piotr Kasprzak, Ramin Yahyapour),
2021-01-01
DOI
BIBTEX
@article{2_84902 author = {Samaneh Sadegh and Kamran Zamanifar and Piotr Kasprzak and Ramin Yahyapour} doi = {10.1016/j.jnca.2021.103025} grolink = {https://resolver.sub.uni-goettingen.de/purl?gro-2/84902} title = {A two-phase virtual machine placement policy for data-intensive applications in cloud} year = {2021} month = {01} }
- A multi-layered policy generation and management engine for semantic policy mapping in clouds
(Faraz Fatemi Moghaddam, Philipp Wieder, Ramin Yahyapour),
2019-01-01
DOI
BIBTEX
@article{2_62713 abstract = {"The long awaited cloud computing concept is a reality now due to the transformation of computer generations. However, security challenges have become the biggest obstacles for the advancement of this emerging technology. A well-established policy framework is defined in this paper to generate security policies which are compliant to requirements and capabilities. Moreover, a federated policy management schema is introduced based on the policy definition framework and a multi-level policy application to create and manage virtual clusters with identical or common security levels. The proposed model consists in the design of a well-established ontology according to security mechanisms, a procedure which classifies nodes with common policies into virtual clusters, a policy engine to enhance the process of mapping requests to a specific node as well as an associated cluster and matchmaker engine to eliminate inessential mapping processes. The suggested model has been evaluated according to performance and security parameters to prove the efficiency and reliability of this multi-layered engine in cloud computing environments during policy definition, application and mapping procedures."} author = {Faraz Fatemi Moghaddam and Philipp Wieder and Ramin Yahyapour} doi = {10.1016/j.dcan.2019.02.001} grolink = {https://resolver.sub.uni-goettingen.de/purl?gro-2/62713} title = {A multi-layered policy generation and management engine for semantic policy mapping in clouds} year = {2019} month = {01} }
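The abstract above mentions a procedure that classifies nodes with common policies into virtual clusters. As a minimal sketch of that grouping idea (hypothetical node names and policy labels; not the paper's policy engine), nodes sharing an identical policy set could be clustered as follows:

```python
# Minimal sketch of grouping nodes with identical policy sets into clusters.
# Node names and policy labels are hypothetical; this is not the paper's engine.
from collections import defaultdict

nodes = {
    "node-a": {"AES-256", "TLS-1.2", "2FA"},
    "node-b": {"AES-256", "TLS-1.2", "2FA"},
    "node-c": {"AES-128", "TLS-1.2"},
}

clusters = defaultdict(list)
for node, policies in nodes.items():
    clusters[frozenset(policies)].append(node)  # identical policy sets end up in the same cluster

for policy_set, members in clusters.items():
    print(sorted(policy_set), "->", sorted(members))
```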
- An Updateable Token-Based Schema for Authentication and Access Management in Clouds
(Tayyebe Emadinia, Faraz Fatemi Moghaddam, Philipp Wieder, Shirin Dabbaghi Varnosfaderani, Ramin Yahyapour),
In Proceedings of the 7th International Conference on Future Internet of Things and Cloud (FiCloud),
2019-01-01
DOI
BIBTEX
@inproceedings{2_63926 author = {Tayyebe Emadinia and Faraz Fatemi Moghaddam and Philipp Wieder and Shirin Dabbaghi Varnosfaderani and Ramin Yahyapour} doi = {10.1109/FiCloud.2019.00015} grolink = {https://resolver.sub.uni-goettingen.de/purl?gro-2/63926} journal = {Proceedings of the 7th International Conference on Future Internet of Things and Cloud (FiCloud)} title = {An Updateable Token-Based Schema for Authentication and Access Management in Clouds} year = {2019} month = {01} }
- Loyal Consumers or One-Time Deal Hunters: Repeat Buyer Prediction for E-Commerce
(Bo Zhao, Atsuhiro Takasu, Ramin Yahyapour, Xiaoming Fu),
In 2019 International Conference on Data Mining Workshops (ICDMW),
2019-01-01
DOI
BIBTEX
@inproceedings{2_63192 abstract = {"Merchants sometimes run big promotions (e.g., discounts or cash coupons) on particular dates (e.g., Boxing-day Sales, \"Black Friday\" or \"Double 11 (Nov 11th)\", in order to attract a large number of new buyers. Unfortunately, many of the attracted buyers are one-time deal hunters, and these promotions may have little long lasting impact on sales. To alleviate this problem, it is important for merchants to identify who can be converted into repeated buyers. By targeting on these potential loyal customers, merchants can greatly reduce the promotion cost and enhance the return on investment (ROI). It is well known that in the field of online advertising, customer targeting is extremely challenging, especially for fresh buyers. With the long-term user behavior log accumulated by Tmall.com, we get a set of merchants and their corresponding new buyers acquired during the promotion on the \"Double 11\" day. Our goal is to predict which new buyers for given merchants will become loyal customers in the future. In other words, we need to predict the probability that these new buyers would purchase items from the same merchants again within 6 months. A data set containing around 200k users is given for training, while the other of similar size for testing. We extracted as many features as possible and find the key features to train our models. We proposed merged model of different classification models and merged lightGBM model with different parameter sets. The experimental results show that our merged models can bring about great performance improvements comparing with the original models."} author = {Bo Zhao and Atsuhiro Takasu and Ramin Yahyapour and Xiaoming Fu} doi = {10.1109/ICDMW.2019.00158} grolink = {https://resolver.sub.uni-goettingen.de/purl?gro-2/63192} journal = {2019 International Conference on Data Mining Workshops (ICDMW)} title = {Loyal Consumers or One-Time Deal Hunters: Repeat Buyer Prediction for E-Commerce} year = {2019} month = {01} }
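The abstract above refers to merging LightGBM models trained with different parameter sets. The snippet below is a hypothetical, minimal sketch of that idea on synthetic data (it assumes lightgbm and scikit-learn are installed and is not the authors' pipeline): several classifiers are trained with different hyperparameters and their predicted probabilities are averaged.

```python
# Hypothetical sketch of a "merged" LightGBM model: train several classifiers with
# different parameter sets and average their predicted probabilities.
# Synthetic data stands in for the Tmall.com features; this is not the authors' pipeline.
import numpy as np
import lightgbm as lgb
from sklearn.datasets import make_classification
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=5000, n_features=20, weights=[0.9, 0.1], random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

param_sets = [
    {"num_leaves": 31, "learning_rate": 0.05, "n_estimators": 200},
    {"num_leaves": 63, "learning_rate": 0.03, "n_estimators": 400},
    {"num_leaves": 15, "learning_rate": 0.10, "n_estimators": 100},
]

probas = []
for params in param_sets:
    model = lgb.LGBMClassifier(random_state=0, **params)
    model.fit(X_train, y_train)
    probas.append(model.predict_proba(X_test)[:, 1])  # probability of the "repeat buyer" class

merged = np.mean(probas, axis=0)  # simple averaging as one way to "merge" the models
print("AUC of the merged prediction:", roc_auc_score(y_test, merged))
```

Plain averaging is only one merging strategy; weighted blending or stacking of heterogeneous classifiers would be natural variants of the "merged model of different classification models" the abstract mentions.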
- A Flexible and Compatible Model for Supporting Assurance Level through a Central Proxy
(Shirin Dabbaghi Varnosfaderani, Piotr Kasprzak, Christof Pohl, Ramin Yahyapour),
In 2019 6th IEEE International Conference on Cyber Security and Cloud Computing (CSCloud)/ 2019 5th IEEE International Conference on Edge Computing and Scalable Cloud (EdgeCom),
2019-01-01
DOI
BIBTEX
@inproceedings{2_62711 abstract = {"Generally, methods of authentication and identification utilized in asserting users' credentials directly affect security of offered services. In a federated environment, service owners must trust external credentials and make access control decisions based on Assurance Information received from remote Identity Providers (IdPs). Communities (e.g. NIST, IETF and etc.) have tried to provide a coherent and justifiable architecture in order to evaluate Assurance Information and define Assurance Levels (AL). Expensive deployment, limited service owners' authority to define their own requirements and lack of compatibility between heterogeneous existing standards can be considered as some of the unsolved concerns that hinder developers to openly accept published works. By assessing the advantages and disadvantages of well-known models, a comprehensive, flexible and compatible solution is proposed to value and deploy assurance levels through a central entity called Proxy."} author = {Shirin Dabbaghi Varnosfaderani and Piotr Kasprzak and Christof Pohl and Ramin Yahyapour} doi = {10.1109/CSCloud/EdgeCom.2019.00018} grolink = {https://resolver.sub.uni-goettingen.de/purl?gro-2/62711} journal = {2019 6th IEEE International Conference on Cyber Security and Cloud Computing (CSCloud)/ 2019 5th IEEE International Conference on Edge Computing and Scalable Cloud (EdgeCom)} title = {A Flexible and Compatible Model for Supporting Assurance Level through a Central Proxy} year = {2019} month = {01} }
- Delay-Sensitive and Availability-Aware Virtual Network Function Scheduling for NFV
(Song Yang, Fan Li, Ramin Yahyapour, Xiaoming Fu),
2019-01-01
DOI
BIBTEX
@article{2_62710 abstract = {"Network Function Virtualization (NFV) has been emerging as an appealing solution that transforms from dedicated hardware implementations to software instances running in a virtualized environment. In NFV, the requested service is implemented by a sequence of Virtual Network Functions (VNF) that can run on generic servers by leveraging the virtualization technology. These VNFs are pitched with a predefined order, and it is also known as the Service Function Chaining (SFC). Considering that the delay and resiliency are two important Service Level Agreements (SLA) in a NFV service, in this paper, we first investigate how to quantitatively model the traversing delay of a flow in both totally ordered and partially ordered SFCs. Subsequently, we study how to calculate the VNF placement availability mathematically for both unprotected and protected SFCs. After that, we study the delay-sensitive Virtual Network Function (VNF) placement and routing problem with and without resiliency concerns. We prove that this problem is NP-hard under two cases. We subsequently propose an exact Integer Nonlinear Programming formulation and an efficient heuristic for this problem in each case. Finally, we evaluate the proposed algorithms in terms of acceptance ratio, average number of used nodes and total running time via extensive simulations."} author = {Song Yang and Fan Li and Ramin Yahyapour and Xiaoming Fu} doi = {10.1109/TSC.2019.2927339} grolink = {https://resolver.sub.uni-goettingen.de/purl?gro-2/62710} title = {Delay-Sensitive and Availability-Aware Virtual Network Function Scheduling for NFV} year = {2019} month = {01} }
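The abstract above refers to computing VNF placement availability for unprotected and protected SFCs. As an illustrative assumption only (a standard series/parallel reliability model, not necessarily the exact formulation used in the paper), the end-to-end availability of a chain of n VNFs hosted on nodes with availabilities a_i, and of a protected chain in which VNF i additionally has a backup with availability a_i', could be written as:

```latex
% Illustrative series/parallel availability model (assumption, not necessarily the paper's model).
\[
  A_{\text{unprotected}} = \prod_{i=1}^{n} a_i ,
  \qquad
  A_{\text{protected}} = \prod_{i=1}^{n} \bigl( 1 - (1 - a_i)\,(1 - a_i') \bigr)
\]
```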
- Cloud Security Distributary Set (CSDS): A Policy-Based Framework to Define Multi-Level Security Structure in Clouds
(Faraz Fatemi Moghaddam, Philipp Wieder, Süleyman Berk Çemberci, Ramin Yahyapour),
In COINS '19 Proceedings of the International Conference on Omni-Layer Intelligent Systems,
2019-01-01
DOI
BIBTEX
@inproceedings{2_62712 abstract = {"Security challenges are the most important obstacles for the advancement of IT-based on-demand services and cloud computing as an emerging technology. In this paper, a structural policy management engine has been introduced to enhance the reliability of managing different policies in clouds and to provide standard as well as dedicated security levels (rings) based on the capabilities of the cloud provider and the requirements of cloud customers. Cloud security ontology (CSON) is an object-oriented framework defined to manage and enable appropriate communication between the potential security terms of cloud service providers. CSON uses two super classes to establish appropriate mapping between the requirements of cloud customers and the capabilities of the service provider."} author = {Faraz Fatemi Moghaddam and Philipp Wieder and Süleyman Berk Çemberci and Ramin Yahyapour} doi = {10.1145/3312614.3312633} grolink = {https://resolver.sub.uni-goettingen.de/purl?gro-2/62712} journal = {COINS '19 Proceedings of the International Conference on Omni-Layer Intelligent Systems} title = {Cloud Security Distributary Set (CSDS): A Policy-Based Framework to Define Multi-Level Security Structure in Clouds} year = {2019} month = {01} }
Open topics for theses and projects
Topic | Professor | Type |
---|---|---|
Currently supervised theses and projects
Topic | Student | Professor | Type |
---|---|---|---|