Identification and quantitative segmentation of individual blood vessels in mice visualized with preclinical imaging techniques is a tedious, manual or semi-automated task that can require weeks of reviewing hundreds of levels of individual data sets. Preclinical imaging, such as micro-magnetic resonance imaging (μMRI), can produce tomographic datasets of murine vasculature across length scales and organs, which is of utmost importance to study tumor progression, angiogenesis, or vascular risk factors for diseases such as Alzheimer’s. Training a neural network capable of accurate segmentation requires a sufficiently large amount of labelled data, which takes a long time to compile. Recently, several reasonably automated approaches have emerged in the preclinical context but still require significant manual input and are less accurate than the deep learning approach presented in this paper—quantified by the Dice score. In this work, the implementation of a shallow, three-dimensional U-Net architecture for the segmentation of vessels in murine brains is presented, which is (1) open-source, (2) can be trained on a small dataset (in this work only 8 μMRI imaging stacks of mouse brains were available), and (3) requires only a small subset of labelled training data. The presented model is evaluated together with two post-processing methodologies using cross-validation, which results in an average Dice score of 61.34% in its best setup. The results show that the methodology detects blood vessels faster and more reliably than state-of-the-art vesselness filters, which achieve an average Dice score of 43.88% on the same dataset.
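As a point of reference, the Dice score used to evaluate the segmentations above can be computed directly from binary voxel masks. The following is a minimal NumPy sketch; the array names and example masks are illustrative, not taken from the paper's data:

```python
import numpy as np

def dice_score(pred: np.ndarray, truth: np.ndarray) -> float:
    """Dice coefficient between two binary masks: 2|A ∩ B| / (|A| + |B|)."""
    pred = pred.astype(bool)
    truth = truth.astype(bool)
    intersection = np.logical_and(pred, truth).sum()
    total = pred.sum() + truth.sum()
    if total == 0:
        return 1.0  # both masks empty: perfect agreement by convention
    return 2.0 * intersection / total

# Example: two small 3D masks with partial overlap
pred = np.zeros((2, 2, 2), dtype=bool)
truth = np.zeros((2, 2, 2), dtype=bool)
pred[0, 0, :] = True   # 2 voxels predicted as vessel
truth[0, 0, 0] = True  # 1 voxel labelled as vessel
print(round(dice_score(pred, truth), 3))  # 2*1 / (2+1) ≈ 0.667
```

A Dice score of 1.0 means perfect overlap; the reported 61.34% vs. 43.88% difference is measured on exactly this scale.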
In the fast-growing but highly competitive market of battery-powered power tools, cell-pack-cooling systems are of high importance, as they guarantee safety and short charging times. A simulation model of an 18 V power tool battery pack was developed to evaluate four different pack-cooling systems (two heat-conductive polymers, one phase change material, and non-convective air as reference) in an application scenario of practical relevance (the intensive use of a power tool followed by cooling-down and charging steps). The simulation comprises battery models of commercially available 21700 cells as well as heat transfer models. The study highlights the performance of the different cooling materials and their effect on the maximum pack temperature and total charging cycle time. Key material parameters and their influence on the battery pack temperature and temperature homogeneity are discussed. Using phase change materials and heat-conductive polymers, a significantly lower maximum temperature during discharge (up to 26%) and a significant shortening of the use/charging cycle (up to 32%) were shown. In addition to the cooling material sweep, a parameter sweep was performed, varying the external temperature and air movement. The strong influence of the conditions of use on the cooling system’s performance was illustrated.
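To illustrate the kind of heat-balance reasoning behind such pack simulations, the sketch below integrates a single-cell lumped-capacitance model with explicit Euler steps. This is not the paper's model; all parameter values (internal resistance, heat-transfer coefficient, cell mass) are illustrative assumptions:

```python
# Minimal lumped-capacitance sketch of one 21700 cell heating during discharge.
# NOT the paper's model; every parameter value below is an illustrative guess.

def simulate_cell_temperature(t_end_s: float, dt: float = 1.0,
                              current_a: float = 20.0,
                              r_internal_ohm: float = 0.02,
                              mass_kg: float = 0.068,
                              cp_j_per_kg_k: float = 900.0,
                              h_w_per_k: float = 0.15,
                              t_ambient_c: float = 25.0) -> float:
    """Explicit-Euler integration of dT/dt = (I^2*R - h*(T - T_amb)) / (m*cp)."""
    temp_c = t_ambient_c
    for _ in range(int(t_end_s / dt)):
        heat_gen_w = current_a ** 2 * r_internal_ohm        # ohmic losses
        heat_loss_w = h_w_per_k * (temp_c - t_ambient_c)    # path to cooling material
        temp_c += dt * (heat_gen_w - heat_loss_w) / (mass_kg * cp_j_per_kg_k)
    return temp_c

print(round(simulate_cell_temperature(600), 1))  # cell temperature after 10 min
```

A better cooling material corresponds to a larger effective `h_w_per_k`, which lowers both the steady-state temperature rise (`I²R/h`) and the thermal time constant — the same qualitative effect the study quantifies for polymers and phase change materials across a full pack.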
Today’s Industry 4.0 Smart Factories involve complicated and highly automated processes. Nevertheless, certain crucial activities remain, such as machine maintenance, that require human involvement. For such activities, many factors have to be taken into account, such as worker safety or worker qualification. This adds to the complexity of selecting and assigning optimal human resources to the processes and of overall coordination. Contemporary Business Process Management (BPM) Systems provide only limited facilities for activity resource assignment. To overcome these limitations, this contribution proposes a BPM-integrated approach that applies fuzzy sets and rule processing for activity assignment. Our findings suggest that our approach has the potential for improved work distribution and cost savings for Industry 4.0 production processes. Furthermore, the scalability of the approach provides efficient performance even with a large number of concurrent activity assignment requests, and the approach can be applied to complex production scenarios with minimal effort.
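The core idea of fuzzy-rule-based assignment can be sketched in a few lines. The membership functions, the single rule, and the worker data below are illustrative assumptions, not the paper's actual rule base:

```python
# Toy sketch of fuzzy activity assignment; rules and values are assumptions.

def triangular(x: float, a: float, b: float, c: float) -> float:
    """Triangular membership function: peaks at b, zero outside (a, c)."""
    if x <= a or x >= c:
        return 0.0
    if x <= b:
        return (x - a) / (b - a)
    return (c - x) / (c - b)

def assignment_score(qualification: float, distance_m: float) -> float:
    """One rule: IF qualification is high AND distance is near THEN suitable.
    Fuzzy AND is modeled here as min(), a common choice of conjunction."""
    high_qual = triangular(qualification, 0.5, 1.0, 1.5)  # peak at fully qualified
    near = triangular(distance_m, -1.0, 0.0, 200.0)       # peak at the machine
    return min(high_qual, near)

# Each worker: (qualification level in [0, 1], distance to machine in meters)
workers = {"w1": (0.9, 50.0), "w2": (0.6, 10.0), "w3": (1.0, 180.0)}
best = max(workers, key=lambda w: assignment_score(*workers[w]))
print(best)  # w1: well qualified and reasonably close beats the other trade-offs
```

A real rule base would aggregate many such rules (safety exposure, shift schedule, certification) before ranking candidates, but the selection step stays the same: evaluate memberships, combine them with fuzzy operators, and assign the highest-scoring worker.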
Although production processes in Industry 4.0 settings are highly automated, many complicated tasks, such as machine maintenance, continue to be executed by human workers. While smart factories can provide these workers with some digitalization support via Augmented Reality (AR) devices, these AR tasks depend on many contextual factors, such as live data feeds from machines in view or current work safety conditions. Although technically available, these localized contextual factors are mostly not well integrated into the global production process, which can result in various problems such as suboptimal task assignment, over-exposure of workers to hazards such as noise or heat, or delays in the production process. Current Business Process Management (BPM) Systems (BPMS) were not designed to consider and integrate context-aware factors during planning and execution. This paper describes the AR-Process Framework (ARPF) for extending a BPMS to support context-integrated modeling and execution of processes with AR tasks in industrial use cases. Our realization shows how the ARPF can be easily integrated with prevalent BPMS. Our evaluation findings from a simulation scenario indicate that ARPF can improve Industry 4.0 processes with regard to AR task execution quality and cost savings.
The volume of program source code created, reused, and maintained worldwide is rapidly increasing, yet code comprehension remains a limiting productivity factor. For developers and maintainers, well-known software design patterns and the abstractions they offer can help support program comprehension. However, manual pattern documentation techniques in code and code-related assets such as comments, documents, or models are not necessarily consistent or dependable and are cost-prohibitive. To address this situation, we propose Hybrid Design Pattern Detection (HyDPD), a generalized, programming-language-agnostic approach that combines graph analysis (GA) and Machine Learning (ML) to automate the detection of design patterns via source code analysis. Our realization demonstrates its feasibility. An evaluation compared each technique and their combination for three common patterns across a set of 75 public single-pattern Java and C# sample projects. The GA component was also used to detect the 23 Gang of Four design patterns across 258 sample C# and Java projects as well as in a large Java project. Performance and scalability were measured. The results show the advantages and potential of a hybrid approach combining GA with artificial neural networks (ANN) for automated design pattern detection, providing compensating advantages such as reduced false negatives and improved F1 scores.
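The hybrid idea of combining a structural graph check with an ML confidence score can be illustrated at a toy level. The relation triples, the Observer-style signature, and the stubbed classifier below are all assumptions for illustration; the real HyDPD approach operates on parsed source code and a trained ANN:

```python
# Toy sketch of hybrid pattern detection: a graph-based structural check
# gating a (stubbed) ML confidence score. Not the actual HyDPD pipeline.

def has_observer_signature(edges: set) -> bool:
    """Structural check: some class both aggregates another class and
    calls an update-style method on it (an Observer-like shape)."""
    aggregates = {(s, t) for s, r, t in edges if r == "aggregates"}
    calls_update = {(s, t) for s, r, t in edges if r == "calls_update"}
    return bool(aggregates & calls_update)

def ml_confidence(features: list) -> float:
    """Stand-in for an ANN classifier: a fixed linear score clipped to [0, 1].
    The weights are illustrative, not trained values."""
    weights = [0.5, 0.3, 0.2]
    return min(1.0, sum(w * f for w, f in zip(weights, features)))

def hybrid_detect(edges: set, features: list, threshold: float = 0.5) -> bool:
    """Combine both signals: the structural match gates the ML confidence,
    which can suppress false positives from either technique alone."""
    return has_observer_signature(edges) and ml_confidence(features) >= threshold

# Class-relationship graph as (source, relation, target) triples
edges = {("Subject", "aggregates", "Observer"),
         ("Subject", "calls_update", "Observer"),
         ("Observer", "implements", "Listener")}
print(hybrid_detect(edges, [1.0, 0.8, 0.4]))  # both signals agree: True
```

Requiring agreement between the two detectors is one simple combination strategy; the complementary strengths (graphs catch structure, ML catches variants) are what the evaluation's reduced false negatives and improved F1 scores reflect.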