
Archive for the ‘News’ Category

Realsearch group congratulates Dr. Laurie Williams – named acting head of NC State's Department of Computer Science effective December 15, 2014!

November 21st, 2014

The Realsearch group extends its heartiest congratulations to Dr. Laurie Williams on being named acting head of the NCSU Department of Computer Science, effective December 15, 2014.

The full news story can be found here.


Congratulations to Dr. JeeHyun Hwang!

March 15th, 2014

JeeHyun passed his final PhD defense on March 14th, 2014.

Title: Improving the Quality of Security Policies

Abstract: Systems such as web applications, database systems, and cloud services regulate users' access to sensitive resources based on security policies. A recent report stated that organizations manage security policies in an ad-hoc and inconsistent manner due to a lack of budget, resources, and staff. Such management can cause serious security problems, including unauthorized access to sensitive resources.

In computer systems, security policies specify the correct functioning of access control: "who" (e.g., which authorized users or processes) can perform which actions under "what" conditions. Faults (i.e., misconfigurations) in security policies can have serious consequences, such as denying an authorized user access to his/her own resources or allowing malicious users to access critical resources.

Policy authors may follow common patterns in specifying and maintaining security policies. Reusing these common patterns helps policy authors reduce mistakes, and violations of the patterns are candidates for inspection to determine whether they expose faults. Moreover, to improve the quality of security policies in terms of correctness, policy authors must conduct rigorous testing and verification during the testing and maintenance phases of the software development process. However, manual test-input generation and verification are error-prone, time-consuming, and tedious tasks.

In this dissertation, we propose approaches that help improve the quality of security policies automatically. Our research goal is to assist policy authors in improving the quality of security policies by providing automated pattern-mining and testing techniques that help detect faults efficiently. This dissertation comprises three research projects, each focusing on a specific software engineering task:

Pattern Mining. We present an approach to mine patterns characterizing correlations of attributes in security policies from the security policies of open source software products. Our approach applies data mining techniques to the policy evolution and specification data of those security policies to identify common patterns, which represent likely usage of security policies. Our approach uses the mined patterns as policy specification rules and detects faults in security policies under analysis as deviations from the mined patterns.

Automated Test Generation. We present a systematic structural testing approach. Our approach is based on the concept of policy coverage, which helps test a policy's structural entities (i.e., rules, predicates, and clauses) to check whether each entity is specified correctly. Our approach analyzes the security policies under test and automatically generates test cases to achieve high structural coverage. These test cases can achieve higher fault-detection capability (i.e., detect more injected faults) than test cases with lower coverage.

Automated Test Selection for Regression Testing. We present a safe-test-selection approach for regression testing of security policies. Among the initial test cases given for the access control systems under test, our approach selects and executes only those test cases that could expose different policy behaviors across multiple versions of a security policy. Our approach efficiently detects unexpected policy behaviors (i.e., regression faults) caused by policy changes.
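To make the selection criterion concrete, here is a minimal Python sketch. The policies, requests, and decisions are hypothetical toys rather than the dissertation's actual policy tooling, and the sketch checks the criterion by brute-force evaluation of both versions, whereas a safe-selection technique identifies the affected tests from the policy change itself.

```python
def policy_v1(request):
    # Version 1: doctors may read and write patient records.
    return "Permit" if request["role"] == "doctor" else "Deny"

def policy_v2(request):
    # Version 2: writes by doctors now also require an active shift.
    if request["role"] == "doctor" and (
            request["action"] == "read" or request.get("on_shift", False)):
        return "Permit"
    return "Deny"

def select_regression_tests(tests, old_policy, new_policy):
    """Keep only tests whose decisions differ between the two policy
    versions; only these can expose a regression fault."""
    return [t for t in tests if old_policy(t) != new_policy(t)]

tests = [
    {"role": "doctor", "action": "read"},
    {"role": "doctor", "action": "write", "on_shift": False},
    {"role": "nurse", "action": "read"},
]
# Only the off-shift write request is selected; the other two requests
# receive identical decisions under both policy versions.
print(select_regression_tests(tests, policy_v1, policy_v2))
```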


Rahul Pandita Passes His Oral Preliminary Exam on December 5th, 2013!

December 16th, 2013

Congratulations to Rahul for passing his second major examination for his PhD.  Just one left!

Title:  Inferring Semantic Information from Natural Language Specifications

Abstract:

Code-level specifications play an important role in software engineering. In addition to guiding the development process by outlining what and how to reuse, specifications also help in the verification process by allowing quality assurance practitioners to test against the expected outcome. One valuable source of such specifications is natural language API documents. However, humans often overlook these documents and build software systems that are inconsistent with the specifications described in them. While tools and frameworks are available to assist humans in building and reusing quality software, these tools are not designed to work on specifications written in natural language. To address this issue, I present a Natural Language Processing (NLP) framework that automates the task of inferring semantic information from natural language software artifacts, bridging the disconnect between the inputs required by software engineering tools/frameworks and the specifications described in natural language.
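As a toy illustration of this kind of inference (a sketch, not Rahul's actual framework), the following Python fragment maps one common Javadoc sentence shape to a machine-readable contract; the sentence, pattern, and output format are all assumptions made for illustration.

```python
import re

# Toy sketch: map the sentence shape "Throws XException if <condition>."
# to a structured contract that a testing or verification tool could use.
THROWS_PATTERN = re.compile(r"[Tt]hrows (\w+Exception) if (.+?)\.?$")

def infer_contract(sentence):
    """Return an (exception, condition) contract, or None if no match."""
    match = THROWS_PATTERN.search(sentence)
    if match:
        return {"exception": match.group(1), "condition": match.group(2)}
    return None

doc = "Throws IllegalArgumentException if the specified key is null."
print(infer_contract(doc))
# {'exception': 'IllegalArgumentException',
#  'condition': 'the specified key is null'}
```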


Pat Francis Passes His 890 Exam

August 31st, 2013
Congratulations to Pat Francis for passing his exam on August 30, 2013!
Title: Determining “Grim Reaper” Policies to Prevent Languishing Bugs

Abstract: Long-lived software products commonly have a large number of reported defects, some of which may not be fixed for a lengthy period of time, if ever. These so-called languishing bugs can impose various costs on project teams, such as wasted time in release planning and in defect analysis and inspection. They also result in an unrealistic view of the number of defects still to be fixed at a given time. The goal of this work is to help software practitioners mitigate the costs of languishing bugs by providing a technique to predict and pre-emptively close them. We analyze defect fix times from an ABB program and the Apache HTTP server, and find that both contain a substantial number of languishing bugs. We also find that these languishing bugs are not sufficiently explained by defect severity: both high and low severity defects languish. Additionally, we train decision tree classification models to predict whether a given defect will be fixed within a desired time period. We propose that an organization could use such a model to form a "grim reaper" policy, whereby defects that are predicted to become languishing are pre-emptively closed. However, initial results are mixed: models for the ABB program achieve F-scores of 63-95%, while models for the Apache program achieve F-scores of 21-59%.
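As a rough sketch of the modeling step, the following Python fragment trains a decision tree classifier on invented defect features; the features, data, and time period are hypothetical stand-ins for the ABB and Apache data used in the study.

```python
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import f1_score
from sklearn.model_selection import train_test_split

# Hypothetical features per defect: [severity (1-5), component churn,
# reporter's prior fix rate, days since last comment].
X = [
    [5, 120, 0.9, 2], [1, 3, 0.1, 400], [3, 40, 0.5, 30],
    [2, 5, 0.2, 365], [4, 90, 0.8, 7], [1, 2, 0.1, 500],
    [5, 150, 0.7, 1], [2, 10, 0.3, 200],
]
# Label: 1 if the defect was fixed within the desired period, else 0
# (a languishing bug -- a candidate for "grim reaper" closure).
y = [1, 0, 1, 0, 1, 0, 1, 0]

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0)
model = DecisionTreeClassifier(max_depth=3, random_state=0)
model.fit(X_train, y_train)
# The study evaluates such models by their F-scores, as quoted above.
print("F-score:", f1_score(y_test, model.predict(X_test)))
```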


Pat Morrison Passes His 890 Written Exam

April 20th, 2013

Congratulations to Pat for passing his 890 Written Exam on April 19th, 2013!

Title: Proposing Regulatory-Driven Automated Test Suites
Abstract: In regulated domains such as finance and health care, failure to comply with regulation can lead to financial, civil, and criminal penalties. While systems vary from organization to organization, the same regulations apply to all systems. As a result, efficiencies could be gained if the commonalities between systems could be captured in public, shared test suites for regulations. We propose the use of Behavior-Driven Development (BDD) technology to create these test suites. With BDD, desired system behavior with respect to regulatory requirements can be captured as constrained natural language 'scenarios'. The scenarios can then be automated through system-specific test drivers. The goal of this research is to enable organizations to compare their systems to regulation in a repeatable and traceable way through the use of BDD. To evaluate our approach, we developed seven scenarios based on the HITECH Act Meaningful Use (MU) regulations for healthcare. We then created system-specific code for three open-source electronic health record systems. We found that it was possible to create scenarios and system-specific code supporting scenario execution on all three systems, that iTrust can be shown to be non-compliant, and that emergency access procedures are not defined clearly enough by the regulation to determine compliance or non-compliance.
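To illustrate the BDD mechanics described above, the following Python sketch uses the behave library to bind a hypothetical scenario to a system-specific test driver. The scenario wording and the EHRClient driver are invented for illustration; they are not among the study's seven actual scenarios or drivers.

```python
# features/steps/emergency_access.py -- illustrative behave bindings.
# The corresponding constrained-natural-language scenario would read:
#   Scenario: Emergency access is audited
#     Given a user with the role "nurse"
#     When the user invokes emergency access for patient "42"
#     Then the access appears in the audit log
from behave import given, when, then
from ehr_driver import EHRClient  # hypothetical system-specific driver

@given('a user with the role "{role}"')
def step_login(context, role):
    context.client = EHRClient()
    context.client.login_as(role)

@when('the user invokes emergency access for patient "{patient_id}"')
def step_emergency_access(context, patient_id):
    context.patient_id = patient_id
    context.client.emergency_access(patient_id)

@then('the access appears in the audit log')
def step_check_audit(context):
    # The same scenario text runs against each system; only this
    # driver layer is system-specific.
    assert context.client.audit_log_contains(
        action="emergency_access", patient=context.patient_id)
```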


Maria Riaz Passes Her 890 Preliminary Exam

April 16th, 2013

Congratulations to Maria on passing her first exam on April 15, 2013!

Topic: On the Design of Empirical Studies to Evaluate the Use of Software Patterns: A Systematic Mapping Study

Abstract:
Software patterns are created with the goal of capturing expert knowledge so it can be efficiently communicated and effectively utilized by the software development community. However, in practice, patterns may or may not achieve these goals. Empirical studies of the use of software patterns can help estimate how well these goals have been met. The objective of this paper is to help researchers design high-quality empirical studies that evaluate software patterns by compiling and analyzing: (1) evaluation criteria and corresponding observable measures used to assess the efficiency and effectiveness of software design patterns in imparting expert knowledge; and (2) study-design attributes that increase or decrease this observed effectiveness. We have selected and analyzed 28 empirical studies reported in 25 separate papers. We have extracted study-design attributes related to participants' demographics, pattern presentation, and problem presentation that may affect study outcomes. We also identified 10 evaluation criteria with 26 corresponding observable measures that are used to evaluate participants' results in these empirical studies. By synthesizing reported observations, we have identified common issues and higher-order themes related to empirical evaluation of software patterns. We observed that capturing data on participants' cognitive load provides insight into the findings of corresponding studies. Minimizing threats to validity at each step during study design is key to the successful design and execution of empirical studies.


John Slankas Passes His Oral Preliminary Exam

January 12th, 2013

Congratulations to John for passing his Oral Preliminary Exam on January 10th, 2013. One more exam to go…

Title: Implementing Database Access Control Policy from Unconstrained Natural Language Text

Abstract
Although software can and does implement access control at the application layer, failure to enforce data access controls at the data layer often allows uncontrolled data access when individuals bypass application controls. The goal of this research is to improve security and compliance by ensuring that access control rules explicitly and implicitly defined within unconstrained natural language texts are appropriately enforced within a system's relational database. Access control implemented in both the application and data layers strongly supports a defense-in-depth strategy. We propose a tool-based process to 1) parse existing, unaltered natural language documents; 2) classify whether or not a statement implies access control and whether or not it implies database design; and, as appropriate, 3) extract policy elements; 4) extract database design; 5) map data objects found in the text to a database schema; and 6) automatically generate the necessary SQL commands to enable the database to enforce access control. Our initial studies of the first three steps indicate that we can effectively identify access control sentences and extract the relevant policy elements.
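As a toy illustration of the final step, the following Python sketch renders already-extracted policy elements as SQL GRANT statements. The element format and the role and table names are assumptions; in the proposed process, extracted data objects are first mapped to the system's actual database schema (step 5).

```python
def grant_statement(role, actions, table):
    """Render one extracted (role, actions, resource) element as SQL."""
    return "GRANT {} ON {} TO {};".format(", ".join(actions), table, role)

# Hypothetical elements as they might emerge from steps 3-5:
elements = [
    ("nurse", ["SELECT"], "patient_records"),
    ("physician", ["SELECT", "UPDATE"], "patient_records"),
]
for role, actions, table in elements:
    print(grant_statement(role, actions, table))
# GRANT SELECT ON patient_records TO nurse;
# GRANT SELECT, UPDATE ON patient_records TO physician;
```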


Congratulations to Dr. Benjamin Smith

June 22nd, 2012

Congratulations to Ben Smith for successfully defending his thesis!

Title: Empirically Developing a Software Security Test Pattern Catalog Using a Grounded Theory Approach

Abstract: Evolving adaptable security assessment techniques is important since the landscape of security threats is constantly changing and expanding. We adapt the notion of a software design pattern, as proposed by Gamma et al., to the domain of black box security testing. A software security test pattern is a description of a generalized test case that could be used to reveal a recurring vulnerability type, described such that the test case can be instantiated a million times over without ever doing it the same way twice. Using grounded theory to develop patterns allows researchers to reuse the observations and reasoning made while developing patterns to evolve those patterns or develop new ones. The goal of this research is to help software testers adapt to the evolving demands of security testing by proposing and evaluating an empirical process for developing software security test patterns using a grounded theory approach.

In this dissertation, we propose a process for empirically developing software security test patterns using a grounded-theory-based approach. The input to this process is a set of software security vulnerabilities, and the output is a security test pattern catalog. The process first creates test cases that reveal recurring vulnerability types, then uses a grounded-theory-based analysis to abstract these test cases into general test templates. Next, we add to each general test template (1) empirically developed keywords that signal a potential use of the test template; and (2) an example of applying the template to a natural language artifact to create a security test case. The resulting initial software security test pattern catalog is a set of test patterns, each consisting of keywords, a test template, and an example.
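As a sketch of the resulting catalog structure, the following Python fragment models a pattern as keywords plus a test template plus an example, along with the keyword-matching step that suggests patterns for a given artifact. All field contents here are invented for illustration.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class SecurityTestPattern:
    name: str
    keywords: List[str]   # signal a potential use of the template
    test_template: str    # generalized procedure and expected results
    example: str          # template applied to a natural language artifact

pattern = SecurityTestPattern(
    name="Unvalidated input",
    keywords=["enter", "input", "field", "form"],
    test_template=("For each input field named in the artifact, submit a "
                   "value outside the specified format; expect rejection."),
    example=('"The user shall enter a date of birth" yields a test that '
             'submits "99/99/9999" and expects the system to reject it.'),
)

def suggest_patterns(artifact_text, catalog):
    """Flag patterns whose keywords appear in a natural language artifact."""
    words = artifact_text.lower()
    return [p.name for p in catalog if any(k in words for k in p.keywords)]

print(suggest_patterns("The user shall enter a date of birth.", [pattern]))
# ['Unvalidated input']
```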

In this dissertation, we produce 11 software security test patterns by applying our empirical process to the CWE/SANS Top 25 Most Dangerous Software Errors [24], a list of generic vulnerability descriptions that CWE/SANS prioritized and ranked based on input from security experts. We first use the generic vulnerability descriptions from the Top 25 to produce 24 specific test templates that reveal these vulnerabilities. We then use 16 vulnerabilities that CWE lists as being "On the Cusp" of the Top 25 to produce two additional specific test templates. Using a grounded-theory-based approach, we then abstract these 26 specific test templates into 11 general test templates by grouping specific test templates together based on similarities in their procedure and expected-results templates. We then add to each general test template an example of applying it to a natural language artifact. We also add to each general test template a list of keywords that, when found in a natural language artifact, suggest the need to apply the template to a system under test. The general test template, the example natural language artifact, and the list of keywords together make up a software security test pattern. After their development, software testers can apply these 11 patterns to a software engineering natural language artifact to produce a black box security test plan.

To demonstrate that we can use patterns produced using our methodology to develop a black box security test plan, we applied the 11 patterns using a public requirements specification for electronic health records (EHR) systems to produce a test plan consisting of 117 tests. To demonstrate that a black box test plan created using our patterns can reveal vulnerabilities, we then executed these 117 tests on three open source EHR systems. We find that 65 out of 351 (18.5%) of our test executions found a specific security exploit in the three EHR systems. Given each vulnerability's potential damage to the software organization, and the sparse distribution of vulnerabilities in a software system, we find that giving software testers with no expertise in security a technique for discovering commonly occurring vulnerabilities is a valuable contribution.

To demonstrate that we can use patterns produced using our methodology to develop a black box test plan using requirements specifications from more than one source, we applied our security test patterns using the requirements specification of a commercial, proprietary system to form a test plan of 125 tests. To demonstrate that the tests created for this commercial, proprietary system can reveal vulnerabilities, we executed these tests on the system. We find that 11 of our 125 test cases (8.8%) revealed security vulnerabilities in a commercial, proprietary software product. The fact that we used the software security test pattern catalog to develop test plans in two different domains indicates that the catalog we produced is not specific to a certain domain or system.

To investigate whether patterns developed using our methodology reveal vulnerabilities that other software security assessment techniques do not, we compared the vulnerabilities we discovered (test failures) in one EHR system to the vulnerabilities discovered by two proprietary software security assessment tools. We compared our results to IBM’s Rational AppScan, the market leader in automated penetration testing tools. We also compared our results to Fortify 360, a market leader in automated static analysis for security. We found that the patterns developed using our methodology reveal different types of vulnerabilities than those revealed by automated penetration testing and static analysis. The tools only identify code-level security problems with software systems, whereas tests created using our catalog reveal vulnerabilities introduced by design decisions as well as code-level problems. We determined that these tools might be more suited for detecting code-level security problems than tests created using our catalog because of the effort involved in using our catalog. However, tests developed using our pattern catalog can help software testers identify vulnerabilities introduced by design decisions even if those testers have no expertise in security. We concluded that this investigation provides more evidence that there is “no silver bullet” in software security testing, and that no single software security assessment technique can find every vulnerability in a system.

To investigate whether software testers who are not experts in security can use patterns developed using our methodology to create software security test plans that resemble test plans that experts would develop, we conducted a user study of 47 “novice” (with respect to security) students at North Carolina State University. These novices used six requirements statements from the previously mentioned public requirements specification for EHR systems to each produce individual security test plans. Separately, we created a panel of experts in software security from students who are studying software security in their research at North Carolina State. We required this panel to form a consensus about a software security test plan for EHR systems using the same six requirements that the novices used. We found that software testers who are not security experts create software security test plans that resemble test plans developed by experts.

In addition to these findings, this dissertation provides the following contributions:

We provide the concept of using a software security test pattern catalog to generate a black box software security test plan that reveals vulnerabilities in software systems.

We provide the first generalized approach for producing security test patterns using a grounded-theory-based methodology driven by vulnerability descriptions.

We provide a software security test pattern catalog of 11 patterns based on a public list of frequently occurring software security vulnerabilities.

We provide a tool that implements the process of applying security test patterns to natural language artifacts.

As attackers’ processes grow more sophisticated and complex, security assessment methodologies must grow and mature in tandem. The findings in this dissertation provide evidence that researchers can use a grounded theory based process to produce security test patterns based on descriptions of vulnerabilities, and that these patterns are effective in discovering vulnerabilities in two domains.


NCSU Hosts a Science of Security Lablet

March 21st, 2012

From the NCSU Press Release:

North Carolina State University, the University of Illinois at Urbana-Champaign and Carnegie Mellon University are each receiving an initial $2.5 million in grant funds from the U.S. National Security Agency (NSA) to stimulate the creation of a more scientific basis for the design and analysis of trusted systems.

The co-principal investigators for the NC State Science of Security Lablet are Dr. Laurie Williams, professor of computer science, and Dr. Michael Rappa, director of the Institute for Advanced Analytics and professor of computer science.

It is widely understood that critical cyber systems must inspire trust and confidence, protect the privacy and integrity of data resources, and perform reliably. To tackle the ongoing challenges of securing tomorrow’s systems, the NSA concluded that a collaborative community of researchers from government, industry and academia is a must.

To that end, the NSA grant has seeded academic “lablets” focused on the development of a Science of Security (SoS) and a broad, self-sustaining community effort to advance it. A major goal is the creation of a unified body of knowledge and analytics methods and tools that can serve as the basis of a trust engineering discipline, curriculum, and rigorous design methodologies. The results of SoS lablet research are to be extensively documented and widely distributed through the use of a new, network-based collaboration environment. The intention is for that environment to be the primary resource for learning about ongoing work in security science, and to be a place to participate with others in advancing the state of the art.

The NC State lablet, which will be housed in the Institute for Next Generation IT Systems (ITng), will contribute broadly to the development of Security Science while leveraging NC State's expertise and experience in analytics, including the extensive expertise available in the NC State Institute for Advanced Analytics.

“The security fortification technique of data encryption has a sound mathematical basis, providing a predictable and quantifiable level of security based upon the strength of the encryption algorithm,” Williams says. “Conversely, the science behind other security techniques that provide vulnerability prevention, detection and fortification is either rudimentary or does not exist. As a result, the principles of designing trustworthy systems often are not rooted in science. The three SoS lablets established by the NSA will research techniques to provide this scientific basis.”

The lablet’s work will draw on several fundamental areas of computing research and on the related analytics. Some ideas from fault-tolerant computing can be adapted to the context of security. Strategies from control theory will be extended to account for the high variation and uncertainty that may be present in systems when they are under attack. Game theory and decision theory principles will be used to explore the interplay between attack and defense. Formal methods will be applied to develop formal notions of security resiliency. End-to-end system analysis will be employed to investigate resiliency of large systems against cyber attack. The lablet’s work will draw upon ideas from other areas of mathematics, statistics and engineering as well.

Established in 2007, the Institute for Advanced Analytics provides graduate education and promotes research in the emerging field of analytics. It serves as a focal point for collaboration among faculty in applied mathematics, statistics, and computer science, among other areas. Core to its mission is preparing students for the challenging task of deriving insights from vast quantities of structured and semi-structured data. The institute's flagship educational program is the nation's first and pre-eminent Master of Science in Analytics (MSA) degree. The MSA is an intensive, full-time, 10-month learning experience with an innovative curriculum and a 5-year track record of exceptional student outcomes.

The ITng is a research organization located within NC State's College of Engineering. Its mission is to provide a forum for collaboration among government, industrial, and university partners, faculty, and students to research and implement solutions that address current IT challenges. Working at the intersection of research, practice, and policy, ITng focuses on next generation information technology challenges in the domains of health and well-being, educational innovation, energy and environment, and security. Other large government-sponsored projects housed in ITng are the Secure Open Systems Initiative (SOSI), sponsored by the Army Research Office, and the National Collaborative for Bio-Preparedness (NCB-Prepared), sponsored by the Department of Homeland Security.


John Slankas Passes Written Preliminary

January 9th, 2012

Congratulations to our own John Slankas, who passed his CSC890 Written Preliminary Exam today. Way to go, John!

Title: Extracting Database RBAC from Uncontrolled Natural Language Text

Date: 1/9/12
Time: 9:00AM
Place: EBII Room 3300

Committee:
Dr. Laurie Williams (advisor)
Dr. Rada Chirkova
Dr. George Rouskas (department representative)

Abstract

Despite numerous proposed mitigation techniques, authorization issues continue to plague organizations that rely on software to appropriately control people's access to restricted information. Although software can and does implement access control at the application layer, failure to authorize data access at the persistence layer beneath applications often causes these issues. The research goal is to improve security and compliance by ensuring that policy and access controls defined within existing natural language texts are appropriately implemented within a system's persistence layer. A tool-based process is proposed to 1) parse existing, unaltered natural language documents such as requirements and policy statements, 2) extract access control elements, and 3) automatically generate the necessary commands to enforce role-based access control within a relational database. To evaluate the process, 550 unaltered statements from a system's requirements document were analyzed. The k-nearest neighbor classifier with a unique distance metric had a precision of 0.90 and a recall of 0.91, outperforming a random-guess baseline, which had a precision of 0.72 and a recall of 0.73. The process correctly identified and mapped 80% of the physical database tables within the evaluated system. The results demonstrate that our process can successfully extract access control elements and establish database role-based access control.
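As a rough sketch of the classification step, the following Python fragment trains a k-nearest neighbor classifier with a custom distance metric over invented sentence features; the actual features and the dissertation's unique distance metric differ.

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

def weighted_hamming(a, b):
    # Toy distance: count mismatched features, weighting the first
    # feature (say, presence of an access-related verb) more heavily.
    weights = np.ones(len(a))
    weights[0] = 3.0
    return float(np.sum(weights * (a != b)))

# Hypothetical features per sentence: [has access verb, has role noun,
# has resource noun, sentence length bucket].
X = np.array([[1, 1, 1, 2], [1, 0, 1, 1], [0, 0, 0, 3],
              [0, 1, 0, 2], [1, 1, 0, 1], [0, 0, 1, 3]])
y = np.array([1, 1, 0, 0, 1, 0])   # 1 = access control sentence

clf = KNeighborsClassifier(n_neighbors=3, metric=weighted_hamming)
clf.fit(X, y)
print(clf.predict(np.array([[1, 1, 0, 2]])))
# [1] -- classified as an access control sentence
```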
