Dependable, Adaptive, and Secure Distributed Systems
14th DADS Track of the 34th ACM Symposium on Applied Computing
Previous years: 13th DADS 2018 | 12th DADS 2017 | 11th DADS 2016 | 10th DADS 2015 | 9th DADS 2014 | 8th DADS 2013 | 7th DADS 2012 | 6th DADS 2011 | 5th DADS 2010 | 4th DADS 2009 | 3rd DADS 2008 | 2nd DADS 2007 | 1st DADS 2006
http://www.sigapp.org/sac/sac2019/
April 8 - 12, 2019, Limassol, Cyprus
The Symposium on Applied Computing has been a primary gathering forum for applied computer scientists, computer engineers, software engineers, and application developers from around the world. SAC 2019 is sponsored by the ACM Special Interest Group on Applied Computing and the SRC Program is sponsored by Microsoft Research.
The track provides a forum for scientists and engineers in academia and industry to present and discuss their latest research findings on selected topics in dependable, adaptive and trustworthy distributed systems and services.
The track is structured into two sessions.
For details, see the SAC program page.
Don't Hesitate to Share! A Novel IoT Data Protection Scheme Based on BGN Cryptosystem
Subir Halder and Mauro Conti
In the cloud-based Internet of Things (IoT), sharing data with third-party services and other users inherently incurs potential risk and leads to unique security and privacy concerns. Existing cryptographic solutions ensure the security of IoT data, but due to their significant computational overhead, most of them are not suitable for resource-constrained IoT devices. To address these concerns, we propose a data protection system that stores encrypted IoT data in a cloud while still allowing query processing over the encrypted data. More importantly, our proposed system features a novel encrypted data sharing scheme based on the Boneh-Goh-Nissim (BGN) cryptosystem, with revocation capabilities and in-situ key update. We perform exhaustive experiments on real datasets, primarily to assess the feasibility of the proposed system on resource-constrained IoT devices. We then measure the computation overhead, storage overhead and throughput. The experimental results show that our system is not only feasible, but also provides a high level of security. Furthermore, the results show that our system is 34% faster, requires 25% less storage, and achieves 15% higher throughput than the best-performing state-of-the-art system.
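The core property the scheme exploits is additively homomorphic encryption: a cloud can aggregate encrypted readings without ever seeing plaintexts. BGN itself requires bilinear pairings, so as an illustrative stand-in the sketch below uses a textbook Paillier cryptosystem (with deliberately tiny primes and Python 3.9+); all names and parameters are illustrative and not taken from the paper.

```python
# Toy Paillier cryptosystem illustrating additive homomorphism,
# the property the BGN-based scheme relies on. NOT secure: tiny primes.
import math, random

def paillier_keygen(p=293, q=433):
    n = p * q
    lam = math.lcm(p - 1, q - 1)
    g = n + 1                                   # standard simple generator
    mu = pow((pow(g, lam, n * n) - 1) // n, -1, n)
    return (n, g), (lam, mu)

def encrypt(pk, m):
    n, g = pk
    r = random.randrange(1, n)
    while math.gcd(r, n) != 1:
        r = random.randrange(1, n)
    return (pow(g, m, n * n) * pow(r, n, n * n)) % (n * n)

def decrypt(pk, sk, c):
    n, _ = pk
    lam, mu = sk
    return ((pow(c, lam, n * n) - 1) // n) * mu % n

pk, sk = paillier_keygen()
c1, c2 = encrypt(pk, 20), encrypt(pk, 22)
# Multiplying ciphertexts adds the underlying plaintexts:
assert decrypt(pk, sk, (c1 * c2) % (pk[0] ** 2)) == 42
```

BGN additionally supports one multiplication of ciphertexts, which is what enables richer queries over the encrypted data than a purely additive scheme.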
Securely deploying distributed computation systems on peer-to-peer networks
Kobe Vrancken, Frank Piessens and Raoul Strackx
More and more off-the-shelf processors support the dynamic construction of Trusted Execution Environments. For instance, Intel Software Guard Extensions (Intel SGX) supports the construction of so-called enclaves on modern Intel Core processors. Hence, it is interesting to design and evaluate practical security architectures that leverage this new technology. One of its possibilities is that it enables deployment of traditional distributed applications that require a group of mutually trusting machines on top of a group of mutually distrusting machines, such as a peer-to-peer network. This paper proposes and evaluates an Intel SGX based approach to securely deploy a subset of distributed systems, called distributed computation systems, in a peer-to-peer fashion, with strong confidentiality and integrity guarantees and without modification of the original system. The approach is evaluated by applying it to distcc, a distributed compiler. The result of this process is a new program called p2pcc, a distributed peer-to-peer compiler. We created two different versions of p2pcc. In the first version, any process spawned on one of the untrusted peers runs in its own enclave, thus providing a very fine-grained form of isolation. Our evaluation shows that the performance cost on today's Intel SGX implementation is too high. The second version of p2pcc groups all processes running on behalf of the same user within the same enclave, thus providing coarser isolation, but still providing strong isolation on all security boundaries. Our evaluation shows that the second approach has good performance while providing strong security guarantees even on current SGX processors. Our results provide evidence that deploying existing distributed computation systems in a peer-to-peer fashion is practical.
Adaptive information dissemination in the Bitcoin network
João Marçal, Luís Rodrigues and Miguel Matos
Distributed ledgers have received significant attention as a building block for cryptocurrencies and have proven to be relevant in several other fields as well. In cryptocurrencies, this abstraction is usually implemented by grouping transactions in blocks that are then linked together to form a blockchain. Nodes need to exchange information to maintain the status of the chain, but this process consumes significant network resources. Unfortunately, naively reducing the number of messages exchanged can have a negative impact on performance and correctness, as some transactions might not be included in the chain. In this paper, we study the mechanisms of information dissemination used in Bitcoin and propose a set of adaptive mechanisms that lower network resource usage. Our experimental evaluation shows that it is possible to lower the bandwidth consumed by 10.2% and the number of exchanged messages by 41.5%, without any negative impact on the number of transactions committed.
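The dissemination mechanism the paper adapts is Bitcoin's inventory-based relaying: a node first announces only item hashes (`inv`), and a peer requests the full payload (`getdata`) only for items it has not yet seen. A minimal sketch of that idea, with class and method names that are illustrative rather than from the paper or the Bitcoin codebase:

```python
# Inventory-based relaying: announce hashes, fetch only unknown payloads.
class Peer:
    def __init__(self, name):
        self.name = name
        self.known = set()          # hashes this peer already holds
        self.requested = []         # payloads it asked for (getdata)

    def on_inv(self, item_hash):
        # Request the payload only if this hash was never seen,
        # saving bandwidth compared to flooding full transactions.
        if item_hash not in self.known:
            self.requested.append(item_hash)
            self.known.add(item_hash)

def announce(peers, item_hash):
    for p in peers:
        p.on_inv(item_hash)

a, b = Peer("a"), Peer("b")
b.known.add("tx1")                  # b already has tx1
announce([a, b], "tx1")
assert a.requested == ["tx1"] and b.requested == []
```

An adaptive scheme can tune, per peer, how aggressively such announcements are batched or suppressed, which is where the reported bandwidth and message savings come from.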
Scalable Lightning Factories for Bitcoin
Alejandro Ranchal Pedrosa, Maria Potop Butucaru and Sara Tucci
Bitcoin, the most popular blockchain system, does not scale even under very optimistic assumptions. Lightning networks, a layer on top of Bitcoin composed of one-to-one lightning channels, make it scale up to 105 million users. Recently, Duplex Micropayment Channel (DMC) factories have been proposed, based on opening multiple one-to-one payment channels at once. DMC factories rely on time-locks to update and close their channels. This mechanism leads to situations where user funds are time-locked for periods that grow with the lifetime of the factory and the number of users, which makes DMC factories inapplicable in real-life scenarios. In this paper, we propose the first channel factory construction, the Lightning Factory, that offers a constant collateral cost, independent of the lifetime of the channel and the number of members of the factory. We compare our proposed design with DMC factories, obtaining better performance by a factor of more than 3000 in terms of the worst-case collateral cost incurred when malicious users use the factory. The message complexity of our factory is n, whereas DMC factories need n² messages, where n is the number of users. Moreover, our factory supports an unbounded number of updates, while in DMC factories the number of updates is bounded by the initial time-lock. Finally, we discuss why our Lightning Factories need BNN, a non-interactive aggregate signature cryptographic scheme, and compare it with the Schnorr and ECDSA schemes used in Bitcoin and Duplex Micropayment Channels.
Failure Prediction in the Internet of Things due to Memory Exhaustion
Rafiuzzaman Mohammad, Julien Gascon-Samson, Karthik Pattabiraman and Sathish Gopalakrishnan
We present a technique to predict failures resulting from memory exhaustion in devices built for the modern Internet of Things (IoT). These devices can run general-purpose applications on the network edge for local data processing to reduce latency, bandwidth and infrastructure costs, and to address data safety and privacy concerns. Applications are, however, not optimized for all devices and could result in sudden and unexpected memory exhaustion failures because of limited available memory on those IoT devices. Proactive prediction of such failures, with sufficient lead time, allows for adaptation of the application or its safe termination. Our memory failure prediction technique for applications running on IoT devices uses k-Nearest-Neighbor (kNN) based machine learning models. We have evaluated our technique using two third-party applications and a real-world IoT simulation application on two different IoT platforms and on an Amazon EC2 t2.micro instance for both single and multitenancy use cases. Our results indicate that our technique significantly outperforms simpler threshold-based techniques: in our test applications, with 180 seconds of lead time, failures were accurately predicted with 88% recall at 74% precision for a single application failure and 76% recall at 71% precision for multitenancy failure.
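The classification step at the heart of the technique can be illustrated with a plain kNN over memory-usage features. The feature vectors, labels, and numbers below are synthetic; the real system's features, lead times, and datasets come from the paper, not from this toy.

```python
# kNN-based failure prediction over synthetic memory-usage samples.
def knn_predict(train, query, k=3):
    """train: list of (feature_vector, label); returns the majority label
    among the k nearest neighbors (squared Euclidean distance)."""
    dist = lambda a, b: sum((x - y) ** 2 for x, y in zip(a, b))
    nearest = sorted(train, key=lambda s: dist(s[0], query))[:k]
    labels = [lab for _, lab in nearest]
    return max(set(labels), key=labels.count)

# (free_mem_MB, alloc_rate_MB_per_s) -> will the app exhaust memory soon?
history = [
    ((900, 1), "safe"), ((850, 2), "safe"), ((800, 1), "safe"),
    ((120, 9), "fail"), ((80, 12), "fail"), ((150, 8), "fail"),
]
assert knn_predict(history, (100, 10)) == "fail"
assert knn_predict(history, (870, 1)) == "safe"
```

In practice the label would encode "failure within the lead time" (e.g. 180 seconds), so a "fail" prediction gives the application time to adapt or terminate safely.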
Planning Workflow Executions when Using Spot Instances in the Cloud
Richard Gil Martinez, Antonia Lopes and Luís Rodrigues
When running workflows in the cloud it is appealing to use spot instances that can be acquired at a fraction of the cost of on-demand instances. Unfortunately, spot instances can be revoked at any time, creating uncertainty about task completion times, which is an impairment for workflows with timeliness requirements. While workflow scheduling has been subject to extensive research, the problem of optimally scheduling deadline-constrained workflows in the cloud while dealing with the uncertainty caused by spot instance revocations has not been fully addressed. In this paper, we plan the execution of workflows in cloud environments to minimize the monetary cost while being subject to timeliness constraints. Our approach constructs a Markov Decision Process (MDP) of the workflow execution and searches for the optimal policy, taking into account the user's preferences in time and cost. The optimal solution is generated offline and actions are selected on-the-fly, depending on the occurrence of failures due to instance revocations. Experimental results with a real-world scientific workflow application demonstrate that, in comparison to approaches that rely on simple heuristics to schedule tasks, our planning-based approach is able to generate reliable solutions that are cheaper and able to meet deadlines.
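A toy MDP in the spirit of this planner: each state is a pending task, the actions are "spot" (cheap, but revoked with some probability and then retried) or "ondemand" (expensive, always succeeds), and value iteration yields the expected-cost-minimizing policy. All numbers are made up for the sketch, and the deadline dimension that the paper also encodes in the state is omitted to keep it short.

```python
# Value iteration on a toy spot-vs-on-demand scheduling MDP.
SPOT_COST, ONDEMAND_COST, P_REVOKE = 1.0, 4.0, 0.3

def plan(n_tasks, sweeps=200):
    # V[i] = minimal expected cost to finish tasks i..n_tasks-1
    V = [0.0] * (n_tasks + 1)
    policy = [""] * n_tasks
    for _ in range(sweeps):
        for i in range(n_tasks - 1, -1, -1):
            # spot: pay, then with prob P_REVOKE stay in state i (retry)
            spot = SPOT_COST + P_REVOKE * V[i] + (1 - P_REVOKE) * V[i + 1]
            ondemand = ONDEMAND_COST + V[i + 1]
            V[i], policy[i] = min((spot, "spot"), (ondemand, "ondemand"))
    return V, policy

V, policy = plan(3)
# Expected spot cost per task is 1/(1-0.3) ≈ 1.43 < 4, so spot wins here.
assert policy == ["spot", "spot", "spot"]
assert abs(V[0] - 3 / 0.7) < 1e-6
```

The offline/online split in the paper corresponds to computing `policy` once and then, at run time, simply looking up the action for whatever state a revocation has left the workflow in.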
Quantitative Comparison of Unsupervised Anomaly Detection Algorithms for Intrusion Detection
Filipe Falcao, Tommaso Zoppi, Caio Barbosa, Anderson Santos, Baldoino Fonseca, Andrea Ceccarelli and Andrea Bondavalli
Anomaly detection algorithms aim at identifying unexpected fluctuations in the expected behavior of target indicators and, when applied to intrusion detection, flag suspected attacks whenever such deviations are observed. Over the years, several such algorithms have been proposed, evaluated experimentally, and analyzed in qualitative and quantitative surveys. However, the experimental comparison of a comprehensive set of algorithms for anomaly-based intrusion detection against a comprehensive set of attack datasets and attack types has not yet been investigated. To fill this gap, in this paper we experimentally evaluate a pool of twelve unsupervised anomaly detection algorithms on five attack datasets. The results allow elaborating on a wide range of arguments, from the behavior of individual algorithms to the suitability of the datasets for anomaly detection. We identify the families of algorithms that are more effective for intrusion detection, and the families that are more robust to the choice of configuration parameters. Further, we confirm experimentally that attacks with unstable and non-repeatable behavior are more difficult to detect, and that datasets where anomalies are rare events usually result in better detection scores.
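One of the simplest members of the unsupervised family the paper studies is a statistical detector that flags a point as anomalous when it deviates from the observed mean by more than a threshold number of standard deviations. The threshold and the traffic figures below are purely illustrative:

```python
# Z-score based unsupervised anomaly detection on a univariate indicator.
import statistics

def zscore_anomalies(samples, threshold=2.5):
    mu = statistics.mean(samples)
    sigma = statistics.pstdev(samples)
    # Guard against zero variance; flag points whose z-score exceeds threshold.
    return [x for x in samples if sigma and abs(x - mu) / sigma > threshold]

# Mostly regular traffic volumes with one burst typical of an attack.
traffic = [100, 102, 98, 101, 99, 100, 103, 97, 100, 950]
assert zscore_anomalies(traffic) == [950]
```

The paper's point about configuration robustness shows up even here: moving `threshold` changes which deviations count as anomalies, and some algorithm families tolerate such parameter changes far better than others.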
A Library for Services Transparent Replication
Paola Pereira, Cristina Meinhardt, Fernando Dotti and Odorico Mendizabal
State Machine Replication (SMR) is a well-known approach to developing fault-tolerant applications. Although it seems conceptually simple, building replicated state machines is not a trivial task. The developer has to be acquainted with the inner workings of the specific agreement protocol to correctly develop and deploy the replicated service (and auxiliary processes, e.g. Paxos roles), instead of focusing on the service itself. In this work we propose a replication library that facilitates the development and deployment of fault-tolerant services, and provides replication transparency to service builders. This library allows deploying a base SMR on top of which new services can be registered at runtime. A service builder focuses on the service implementation and registers the service with the base SMR to enjoy the benefits of replication. Besides separating the complexity of providing a replicated infrastructure from the service implementation, multiple services share the same consensus and replication infrastructure, allowing cost amortization. According to our evaluation, this approach leads to higher overall throughput compared to the separate deployment of different SMRs over the same resources.
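The registration idea can be sketched as a base replicated state machine that dispatches ordered commands to services registered at runtime, so service code never touches the consensus layer. Here the total order is faked with a local log; the names are illustrative and not the library's actual API.

```python
# Base SMR that services register with; commands are applied in log order.
class BaseSMR:
    def __init__(self):
        self.services = {}
        self.log = []               # stands in for the consensus-ordered log

    def register(self, name, service):
        self.services[name] = service

    def submit(self, name, command):
        # In the real system the command first goes through consensus
        # (e.g. Paxos), so every replica applies the same sequence.
        self.log.append((name, command))
        return self.services[name].apply(command)

class CounterService:
    """A service builder writes only this part: deterministic apply()."""
    def __init__(self):
        self.value = 0
    def apply(self, command):
        if command == "inc":
            self.value += 1
        return self.value

smr = BaseSMR()
smr.register("counter", CounterService())
smr.submit("counter", "inc")
assert smr.submit("counter", "inc") == 2
assert smr.log == [("counter", "inc"), ("counter", "inc")]
```

Because every registered service funnels its commands through the same log, one consensus deployment is amortized across all of them, which is the cost-sharing argument the abstract makes.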
For details, see the SAC poster page.
Distributed Storage System based on Permissioned Blockchain
Racin Nygaard, Hein Meling and Leander Jehl
Readily available blockchain technologies make it possible to improve the reliability and availability of existing cloud applications. This paper presents a blockchain-based distributed storage system for permissioned settings. We use a blockchain to form verifiable contracts between clients and storage providers, specifying what should be stored and when stored data can be deleted. Our approach utilizes a lightweight proof-of-storage mechanism to verify the availability of stored data. Further, we publish majority proofs on the blockchain to incentivize storage providers to behave correctly and to identify misbehavior.
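A lightweight proof-of-storage check of the kind the poster describes can be sketched with Merkle proofs: the client keeps only the Merkle root of the stored chunks; the provider answers a random challenge with a chunk plus its sibling path, and the client verifies the path against the root. The details below are illustrative, not the poster's actual protocol.

```python
# Merkle-proof sketch of a lightweight proof-of-storage challenge.
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root(leaves):
    level = [h(x) for x in leaves]          # assumes len(leaves) a power of 2
    while len(level) > 1:
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

def prove(leaves, index):
    """Provider side: return the sibling path for the challenged chunk."""
    level, path = [h(x) for x in leaves], []
    while len(level) > 1:
        path.append(level[index ^ 1])       # sibling at this level
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
        index //= 2
    return path

def verify(root, chunk, index, path):
    """Client side: recompute the root from the chunk and its path."""
    node = h(chunk)
    for sibling in path:
        node = h(node + sibling) if index % 2 == 0 else h(sibling + node)
        index //= 2
    return node == root

chunks = [b"c0", b"c1", b"c2", b"c3"]
root = merkle_root(chunks)
assert verify(root, b"c2", 2, prove(chunks, 2))
assert not verify(root, b"bad", 2, prove(chunks, 2))
```

Publishing such proofs (or their outcomes) on the blockchain is what lets honest behavior be rewarded and misbehavior attributed, as the abstract states.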
Is it Safe to Dockerize my Database Benchmark?
Martin Grambow, Jonathan Hasenburg and David Bermbach
Docker seems to be an attractive solution for cloud database benchmarking as it simplifies the setup process through pre-built images that are portable and simple to maintain. However, the use of Docker for benchmarking is only valid if it has no effect on measurement results. Existing work has so far only focused on the performance overheads that Docker directly induces for specific applications. In this paper, we study the indirect effects of dockerization on the results of database benchmarking. Among other findings, our results clearly show that containerization has a measurable and non-constant influence on measurement results and should, hence, only be used after careful analysis.
Karl M. Göschka (Main contact chair)
University of Applied Sciences Technikum Wien
Embedded Systems Institute
Hoechstaedtplatz 6
A-1200 Vienna, Austria
phone: +43 664 180 6946
fax: +43 664 188 6275
dads@dedisys.org
goeschka (at) technikum-wien dot at
Rui Oliveira
Universidade do Minho
Computer Science Department
Campus de Gualtar
4710-057 Braga, Portugal
phone: +351 253 604 452 / Internal: 4452
fax: +351 253 604 471
rco (at) di dot uminho dot pt
Peter Pietzuch
Imperial College London
Department of Computing
South Kensington Campus
180 Queen's Gate
London SW7 2AZ, United Kingdom
phone: +44 (20) 7594 8314
fax: +44 (20) 7581 8024
prp (the at sign goes here) doc (dot) ic (dot) ac (dot) uk
Giovanni Russello
University of Auckland
Department of Computer Science
Private Bag 92019
Auckland 1142, New Zealand
phone: +64 9 373 7599 ext. 86137
g dot russello at auckland dot ac dot nz
Paper submission: September 24, 2018 (11:59 PM Pacific Time) - extended
Author notification: November 24, 2018
Camera-ready papers: December 10, 2018
For general information about SAC, please visit: http://www.sigapp.org/sac/sac2019/
If you have further questions, please do not hesitate to contact us: dads@dedisys.org