Friday, 17 July 2015

Javascript Cryptography: A Beginner's Guide

Hi All,

The following resolves a list of common questions with respect to Javascript cryptography:


We mean attempts to implement security features in browsers using cryptographic algorithms implemented in whole or in part in Javascript.
You may now be asking yourself, "What about Node.js? What about non-browser Javascript?". Non-browser Javascript cryptography is perilous, but not doomed. For the rest of this document, we're referring to browser Javascript when we discuss Javascript cryptography.


The web hosts most of the world's new crypto functionality. A significant portion of that crypto has been implemented in Javascript, and is thus doomed. This is an issue worth discussing.


You have a web application. People log in to it with usernames and passwords. You'd rather they didn't send their passwords in the clear, where attackers can capture them. You could use SSL/TLS to solve this problem, but that's expensive and complicated. So instead, you create a challenge-response protocol, where the application sends Javascript to user browsers that gets them to send HMAC-SHA1(password, nonce) to prove they know a password without ever transmitting the password.
Or, you have a different application, where users edit private notes stored on a server. You'd like to offer your users the feature of knowing that their notes can't be read by the server. So you generate an AES key for each note, send it to the user's browser to store locally, forget the key, and let the user wrap and unwrap their data.


They will both fail to secure users.


For several reasons, including the following:
  • Secure delivery of Javascript to browsers is a chicken-egg problem.
  • Browser Javascript is hostile to cryptography.
  • The "view-source" transparency of Javascript is illusory.
  • Until those problems are fixed, Javascript isn't a serious crypto research environment, and suffers for it.


If you don't trust the network to deliver a password, or, worse, don't trust the server not to keep user secrets, you can't trust them to deliver security code. The same attacker who was sniffing passwords or reading diaries before you introduce crypto is simply hijacking crypto code after you do.


There are three misconceptions embedded in that common objection, all of them grave.
First, although the "hijack the crypto code to steal secrets" attack sounds complicated, it is in fact simple. Any attacker who could swipe an unencrypted secret can, with almost total certainty, intercept and alter a web request. Intercepting requests does not require advanced computer science. Once an attacker controls the web requests, the work needed to fatally wound crypto code is trivial: the attacker need only inject another <SCRIPT> tag to steal secrets before they're encrypted.
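How little work the injected `<SCRIPT>` tag needs to do can be sketched in a few lines. The names here are illustrative: `app.encrypt` stands in for a page's real crypto routine, and `stolen` for the attacker's exfiltration channel:

```javascript
// Sketch of the injected-script attack: the attacker's code, running in
// the same page, wraps the application's encrypt function and copies the
// plaintext before it is ever encrypted.
const stolen = [];
const app = {
  encrypt(plaintext) { return '<ciphertext of ' + plaintext + '>'; }
};

// --- attacker-injected code starts here ---
const realEncrypt = app.encrypt;
app.encrypt = function (plaintext) {
  stolen.push(plaintext);        // exfiltrate the secret before encryption
  return realEncrypt(plaintext); // behave normally so nothing looks wrong
};
```

From the application's point of view nothing has changed: encryption still "works", which is exactly why the attack is so hard to notice.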
Second, the difficulty of an attack is irrelevant. What's relevant is how tractable the attack is. Cryptography deals in problems that are intractable even stipulating an attacker with as many advanced computers as there are atoms composing the planet we live on. On that scale, the difficulty of defeating a cryptosystem delivered over an insecure channel is indistinguishable from "so trivial as to be automatic". Further perspective: we live and work in an uncertain world in which any piece of software we rely on could be found vulnerable to new flaws at any time. But all those flaws require new R&D effort to discover. Relative to the difficulty of those attacks, against which the industry deploys hundreds of millions of dollars every year, the difficulties of breaking Javascript crypto remain imperceptibly different from "trivial".
Finally, the security value of a crypto measure that fails can easily fall below zero. The most obvious way that can happen is for impressive-sounding crypto terminology to convey a false sense of security. But there are worse ways; for instance, flaws in login crypto can allow attackers to log in without ever knowing a user's password, or can disclose one user's documents to another user.


You can. It's harder than it sounds, but you can safely transmit Javascript crypto to a browser using SSL. The problem is, having established a secure channel with SSL, you no longer need Javascript cryptography; you have "real" cryptography. Meanwhile, the Javascript crypto code is still imperiled by other browser problems.


You can't simply send a single Javascript file over SSL/TLS. You have to send all the page content over SSL/TLS. Otherwise, attackers will hijack the crypto code using the least-secure connection that builds the page.


In a dispiriting variety of ways, among them:
  • The prevalence of content-controlled code.
  • The malleability of the Javascript runtime.
  • The lack of systems programming primitives needed to implement crypto.
  • The crushing weight of the installed base of users.
Each of these issues creates security gaps that are fatal to secure crypto. Attackers will exploit them to defeat systems that should otherwise be secure. There may be no way to address them without fixing browsers.


We mean that pages are built from multiple requests, some of them conveying Javascript directly, and some of them influencing Javascript using DOM tag attributes (such as "onmouseover").


This won't work.
Content-controlled code means you can't reason about the security of a piece of Javascript without considering every other piece of content that built the page that hosted it. A crypto routine that is completely sound by itself can be utterly insecure hosted on a page with a single, invisible DOM attribute that backdoors routines that the crypto depends on.
This isn't an abstract problem. It's an instance of "Javascript injection", better known to web developers as "cross-site scripting". Virtually every popular web application ever deployed has fallen victim to this problem, and few researchers would take the other side of a bet that most will again in the future.
Worse still, browsers cache both content and Javascript aggressively; caching is vital to web performance. Javascript crypto can't control the caching behavior of the whole browser with specificity, and for most applications it's infeasible to entirely disable caching. This means that unless you can create a "clean-room" environment for your crypto code to run in, pulling in no resource tainted by any other site resource (from layout to UX), you can't even know what version of the content you're looking at.


We mean you can change the way the environment works at runtime. And it's not bad; it's a fantastic property of a programming environment, particularly one used "in the small" like Javascript often is. But it's a real problem for crypto.
The problem with running crypto code in Javascript is that practically any function that the crypto depends on could be overridden silently by any piece of content used to build the hosting page. Crypto security could be undone early in the process (by generating bogus random numbers, or by tampering with constants and parameters used by algorithms), or later (by spiriting key material back to an attacker), or --- in the most likely scenario --- by bypassing the crypto entirely.
There is no reliable way for any piece of Javascript code to verify its execution environment. Javascript crypto code can't ask, "am I really dealing with a random number generator, or with some facsimile of one provided by an attacker?" And it certainly can't assert "nobody is allowed to do anything with this crypto secret except in ways that I, the author, approve of". These are two properties that often are provided in other environments that use crypto, and they're impossible in Javascript.
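As an illustration of that malleability, any script on the page can silently replace a function the crypto depends on. The names below are illustrative; a real attack would more likely target `crypto.getRandomValues` or an app's own RNG wrapper:

```javascript
// Any content that builds the page can override the runtime. Here the
// "attacker" pins Math.random to a constant, so any key material derived
// from it becomes completely predictable.
Math.random = function () {
  return 0.4; // attacker-chosen "randomness"
};

function naiveKeyByte() {
  // Crypto code that naively uses Math.random() for key material.
  return Math.floor(Math.random() * 256);
}
// Every "random" key byte is now the same attacker-known value.
```

The crypto code has no way to detect that this happened; it calls `Math.random()` as usual and gets attacker-controlled output.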


You could. It's harder than it sounds, because you'd have to verify the entire runtime, including anything the DOM could contribute to it, but it is theoretically possible. But why would you ever do that? If you can write a runtime verifier extension, you can also do your crypto in the extension, and it'll be far safer and better.
"But", you're about to say, "I want my crypto to be flexible! I only want the bare minimum functionality in the extension!" This is a bad thing to want, because ninety-nine and five-more-nines percent of the crypto needed by web applications would be entirely served by a simple, well-specified cryptosystem: PGP.
The PGP cryptosystem is approaching two decades of continuous study. Just as all programs evolve towards a point where they can read email, and all languages contain a poorly-specified and buggy implementation of Lisp, most crypto code is at heart an inferior version of PGP. PGP sounds complicated, but there is no reason a browser-engine implementation would need to be (for instance, the web doesn't need all the keyring management, the "web of trust", or the key servers). At the same time, much of what makes PGP seem unwieldy is actually defending against specific, dangerous attacks.


Definitely not. It'd be nice if your browser could generate, store, and use its own PGP keys though.


Here's a starting point: a secure random number generator.


Virtually all cryptography depends on secure random number generators (crypto people call them CSPRNGs). In most schemes, the crypto keys themselves come from a CSPRNG. If your PRNG isn't CS, your scheme is no longer cryptographically secure; it is only as secure as the random number generator.


It's actually hard to say, because in real cryptosystems, bad RNGs are a "hair on fire" problem solved by providing a real RNG. Some RNG schemes are pencil-and-paper solvable; others are "crackable", like an old DES crypt(3) password. It depends on the degree of badness you're willing to accept. But: no SSL system would accept any degree of RNG badness.


How can you do that without SSL? And if you have SSL, why do you need Javascript crypto? Just use the SSL.


"Javascript Cryptography. It's so bad, you'll consider making async HTTPS requests to RANDOM.ORG simply to fetch random numbers."
Imagine a system that involved your browser encrypting something, but filing away a copy of the plaintext and the key material with an unrelated third party on the Internet just for safekeeping. That's what this solution amounts to. You can't outsource random number generation in a cryptosystem; doing so outsources the security of the system.


Two big ones are secure erase (Javascript is usually garbage collected, so secrets are lurking in memory potentially long after they're needed) and functions with known timing characteristics. Real crypto libraries are carefully studied and vetted to eliminate data-dependent code paths --- ensuring that one similarly-sized bucket of bits takes as long to process as any other --- because without that vetting, attackers can extract crypto keys from timing.
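The timing problem can be seen by comparing a naive equality check against a constant-time one. This is an illustrative sketch; real libraries rely on vetted primitives (e.g. Node's `crypto.timingSafeEqual`) rather than hand-rolled loops:

```javascript
// A naive comparison returns at the first mismatch, so its running time
// leaks how many leading characters matched -- enough, over many samples,
// to recover a secret byte by byte.
function naiveEqual(a, b) {
  if (a.length !== b.length) return false;
  for (let i = 0; i < a.length; i++) {
    if (a[i] !== b[i]) return false; // early exit: data-dependent timing
  }
  return true;
}

// A constant-time variant accumulates differences instead of exiting
// early, so equal-length inputs take the same time regardless of content.
function constantTimeEqual(a, b) {
  if (a.length !== b.length) return false;
  let diff = 0;
  for (let i = 0; i < a.length; i++) {
    diff |= a.charCodeAt(i) ^ b.charCodeAt(i);
  }
  return diff === 0;
}
```

Both functions compute the same answer; only their timing behavior differs, which is exactly the property that is hard to guarantee in a Javascript runtime you don't control.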


That's true. But what's your point? We're not saying Javascript is a bad language. We're saying it doesn't work for crypto inside a browser.


Some of them are; crypto is perilous.
But many of them aren't, because they can deploy countermeasures that Javascript can't. For instance, a web app developer can hook up a real CSPRNG from the operating system with an extension library, or call out to constant-time compare functions.
If Python was the standard browser content programming language, browser Python crypto would also be doomed.


A secure keystore.


A way to generate and store private keys that doesn't depend on an external trust anchor.


It means there's no way to store a key securely in Javascript that couldn't be expressed with the same fundamental degree of security by storing the key on someone else's server.


That scheme is, at best, only as secure as the server that fed you the code you used to secure the key. You might as well just store the key on that server and ask for it later. For that matter, store your documents there, and keep the moving parts out of the browser.


Check back in 10 years when the majority of people aren't running browsers from 2008.


Compare downsides: using Arial as your typeface when you really wanted FF Meta, or coughing up a private key for a crypto operation.
We're not being entirely glib. Web standards advocates care about graceful degradation, the idea that a page should at least be legible even if the browser doesn't understand some advanced tag or CSS declaration.
"Graceful degradation" in cryptography would imply that the server could reliably identify which clients it could safely communicate with, and fall back to some acceptable substitute in cases where it couldn't. The former problem is unsolved even in the academic literature. The latter recalls the chicken-egg problem of web crypto: if you have an acceptable lowest-common-denominator solution, use that instead.




We meant that you can't just look at a Javascript file and know that it's secure, even in the vanishingly unlikely event that you were a skilled cryptographer, because of all the reasons we just cited.


Nobody installs hundreds of applications every day. Nobody re-installs each application every time they run it. But that's what people are doing, without even realizing it, with web apps.
This is a big deal: it means attackers have many hundreds of opportunities to break web app crypto, where they might only have one or two opportunities to break a native application.


An attacker can exploit a flaw in a web app across tens or hundreds of thousands of users at one stroke. They can't get a hundred thousand credit card numbers on the street.


Nobody would accept any of the problems we're dredging up here in a real cryptosystem. If SSL/TLS or PGP had just a few of these problems, it would be front-page news in the trade press.


It isn't.


AES is to "secure cryptosystems" what uranium oxide pellets are to "a working nuclear reactor". Ever read the story of the radioactive boy scout? He bought an old clock with a radium-painted face and found a vial of radium paint inside. Using that and a strip of beryllium swiped from his high school chemistry lab, he built a radium gun that irradiated pitchblende. He was on his way to building a "working breeder reactor" before moon-suited EPA officials shut him down and turned his neighborhood into a Superfund site.
The risks in building cryptography directly out of AES and SHA routines are comparable. It is capital-H Hard to construct safe cryptosystems out of raw algorithms, which is why you generally want to use high-level constructs like PGP instead of low-level ones.


SJCL is great work, but you can't use it securely in a browser for all the reasons we've given in this document.
SJCL is also practically the only example of a trustworthy crypto library written in Javascript, and it's extremely young.
The authors of SJCL themselves say, "Unfortunately, this is not as great as in desktop applications because it is not feasible to completely protect against code injection, malicious servers and side-channel attacks." That last example is a killer: what they're really saying is, "we don't know enough about Javascript runtimes to know whether we can securely host cryptography on them". Again, that's painful-but-tolerable in a server-side application, where you can always call out to native code as a workaround. It's death to a browser.


People don't take Javascript crypto seriously because they can't get past things like "there's no secure way to key a cryptosystem" and "there's no reliably safe way to deliver the crypto code itself" and "there's practically no value to doing crypto in Javascript once you add SSL to the mix, which you have to do to deliver the code".


DETROIT --- A man who became the subject of a book called "The Radioactive Boy Scout" after trying to build a nuclear reactor in a shed as a teenager has been charged with stealing 16 smoke detectors. Police say it was a possible effort to experiment with radioactive materials.
The world works the way it works, not the way we want it to work. It's one thing to point at the flaws that make it hard to do cryptography in Javascript and propose ways to solve them; it's quite a different thing to simply wish them away, which is exactly what you do when you deploy cryptography to end-users using their browser's Javascript runtime.

Wednesday, 8 July 2015

Fixing a fake USB Flash Disk tutorial

Hi All,

In this post let me share how to fix a fake USB flash disk/memory card/USB drive:

The solution to this problem can be briefly summarized as follows:
Step 1: Identify The Real Size Of Your Flash Disk
The first thing you need to do is identify the disk's speed class and verify whether you can actually write files to the advertised capacity of your flash memory card.
To test it, you could use H2testw 1.4.
Step 2: Identify Software To Repair The Real Size Of Your Flash Disk
You could try ChipGenius, which claims to inspect and repair cases where the USB flash controller chip reports the wrong VID/PID information.
Step 3: Repairing Your Fake Flash Disk
If the flash disk turned out to be fake, you could try the following to repair it:
Operating System Disk :
It involves removing the existing hard disk from a computer or laptop, booting from the operating system disk, then reformatting the memory card. It appears to be very successful. You can’t use an OEM disk provided with your computer or laptop, it must be a full Windows operating system CD or DVD.
Primary Partitioning For The Reported Flash disk :
The alternative option is to use the information provided by H2testw to build a fence. That is, create a primary partition on the flash disk slightly less than the real capacity reported by H2testw. The Windows operating system sees the balance of the capacity as unallocated. You must always remember never to touch or format the additional unallocated capacity, because it is the capacity that is fake; it does not really exist! If you own Acronis Disk Director software, you can use it instead.
Another option worth checking is the testdrive guide from Instructables.
Sample test picture
Hope it helps

Monday, 6 July 2015

Security Testing Report Template

Hi All

While browsing the internet, I came across some useful links for preparing a security test report.

Pen-Test Web Application Report :

Report Template
  • Introduction
    • Date carried out
    • Testing Team details
      • Name
      • Contact Nos.
      • Relevant Experience if required.
    • Network Details
      • Peer to Peer, Client-Server, Domain Model, Active Directory, integrated
      • Number of Servers and workstations
      • Operating System Details
      • Major Software Applications
      • Hardware configuration and setup
      • Interconnectivity and by what means i.e. T1, Satellite, Wide Area Network, Lease Line Dial up etc.
      • Encryption/ VPN’s utilized etc.
      • Role of the network or system
    • Scope of test
      • Constraints and limitations imposed on the team i.e. Out of scope items, hardware, IP addresses.
      • Constraints, limitations or problems encountered by the team during the actual test
      • Purpose of Test
        • Deployment of new software release etc.
        • Security assurance for the Code of Connection
        • Interconnectivity issues.
      • Type of Test
        • Compliance Test
        • Vulnerability Assessment
        • Penetration Test
      • Test Type
        • White-Box
          • The testing team has carte blanche access to the testing network and has been supplied with network diagrams, hardware, operating system and application details etc. prior to a test being carried out. This does not equate to a truly blind test but can speed up the process a great deal and leads to more accurate results being obtained. The amount of prior knowledge leads to a test targeting specific operating systems, applications and network devices that reside on the network rather than spending time enumerating what could possibly be on the network. This type of test equates to a situation whereby an attacker may have complete knowledge of the internal network.
        • Black-Box
          • No prior knowledge of the company network is supplied. In essence an example of this is when an external web-based test is to be carried out and only the details of a website URL or IP address are supplied to the testing team. It would be their role to attempt to break into the company website/network. This would equate to an external attack carried out by a malicious hacker.
        • Grey-Box
          • The testing team would simulate an attack that could be carried out by a disgruntled, disaffected staff member. The testing team would be supplied with appropriate user level privileges and a user account and access permitted to the internal network by relaxation of specific security policies present on the network i.e. port level security.
  • Executive Summary (Brief and Non-technical)
    • OS Security issues discovered with appropriate criticality level specified
      • Exploited
        • Causes
          • Hardware failing
          • Software failing
          • Human error
      • Unable to exploit – problem area
        • Causes
          • Hardware failing
          • Software failing
          • Human error
    • Application Security issues discovered with appropriate criticality level specified
      • Exploited
      • Unable to exploit – problem area
    • Physical Security issues discovered with appropriate criticality level specified
      • Exploited
      • Unable to exploit – problem area
    • Personnel Security issues discovered with appropriate criticality level specified
      • Exploited
      • Unable to exploit – problem area
    • General Security issues discovered with appropriate criticality level specified
      • Exploited
      • Unable to exploit – problem area
  • Technical Summary
    • OS Security issues discovered
      • File System Security
        • Details of finding
          • Example: A FAT partition was found. FAT by default does not give the ability to set appropriate access control permissions to files. In addition moving files to this area removes the protection of the current ACLs applied to the file.
        • Recommendation and fix
          • Example: Format the file system to NTFS.
      • Password Policy
        • Details of finding
          • Example: LM Hashes found still being utilized on the network.
        • Recommendation and fix
          • Example: Ensure NTLM2 is enforced by means of the correct setting in Group Policy.
      • Auditing Policy
        • Details of finding
          • Example: Logon success and failure was not enabled
        • Recommendation and fix
          • Example: Amend appropriate Group Policy Objects and ensure it is tested and then applied to all relevant Organizational Units etc.
      • Patching Policy
        • Details of finding
          • Example: Several of the latest Microsoft patches were found to be missing
        • Recommendation and fix
          • Example: Ensure a rigorous patching policy is instigated after first being tested on a development LAN to ensure stability. Review the settings on the WSUS server and ensure that it is regularly updated and an appropriate update strategy is instigated for the domain.
      • Anti-virus Policy
        • Details of finding
          • Example: Several workstations were found to have out of date anti-virus software. In addition where it was found to be installed the actual product was found to be mis-configured and did not provide on-access protection.
        • Recommendation and fix
          • Example: Ensure all workstations are regularly updated and configured correctly to ensure maximum protection is afforded
      • Trust Policy
        • Details of finding
          • Example: Users from one domain were unable to access resources on another tree.
        • Recommendation and fix
          • Example: Review transitive and non-transitive trusts and ensure that all relevant trusts have been established.
    • Web Server Security
      • File System Security
        • Details of finding
          • Example: Incorrect permissions on the www root.
        • Recommendation and fix
          • Example: Apply more stringent permissions or remove various users/groups that currently have access to this area.
      • Password Policy
        • Details of finding
          • Example: Areas of the website that should be protected did not have any password mechanism enforced.
        • Recommendation and fix
          • Example: Ensure areas that require access to be limited are password protected.
      • Auditing Policy
        • Details of finding
          • Example: Web server logs were not being reviewed for illicit behaviors.
        • Recommendation and fix
          • Example: Regularly review all audit logs.
      • Patching Policy
        • Details of finding
          • Example: The latest patch was not applied to the server leaving it susceptible to a Denial of Service Attack.
        • Recommendation and fix
          • Example: Apply the latest patch after testing on a development server to ensure compatibility with installed applications and stability of the server is maintained.
      • Lockdown Policy
        • Details of finding
          • Example: The IIS lockdown tool has not been applied to the web server.
        • Recommendation and fix
          • Example: Apply the IIS lockdown tool to the server after first testing on a development server to ensure compatibility with installed applications and stability of the server is maintained.
    • Database Server Security
      • File System Security
        • Details of finding
          • Example: Loose access control permissions were found on directories containing important configuration files that govern access to the server.
        • Recommendation and fix
          • Example: Ensure stringent access control permissions are enforced.
      • Password Policy
        • Details of finding
          • Example: Clear text passwords were found stored within the database.
        • Recommendation and fix
          • Example: Ensure all passwords, if required to be stored within the database are encrypted and afforded the maximum protection possible.
      • Auditing Policy
        • Details of finding
          • Example: Reviews of the audit logs from the TNS Listener were not being carried out.
        • Recommendation and fix
          • Example: Ensure all relevant audit logs are regularly inspected. Audit logs may give you the first clue to possible attempts to brute force access into the database.
      • Patching Policy
        • Details of finding
          • Example: The latest Oracle CPU was not installed, leaving the system susceptible to multiple buffer and heap overflows and possible Denial of Service attacks.
        • Recommendation and fix
          • Example: Install the latest Oracle CPU after first testing on a development server to ensure adequate compatibility and stability.
      • Lockdown Policy
        • Details of finding
          • Example: Numerous extended stored procedures were directly accessible by the public role.
        • Recommendation and fix
          • Example: Ensure the public role is revoked from all procedures where direct access is not required or utilized.
      • Trust Policy
        • Details of finding
          • Example: Clear text Link passwords were discovered.
        • Recommendation and fix
          • Example: Ensure all Link passwords are encrypted, review the requirement to utilize these Links on a regular basis.
    • General Application Security
      • File System Security
        • Details of finding
        • Recommendation and fix
      • Password Policy
        • Details of finding
        • Recommendation and fix
      • Auditing Policy
        • Details of finding
        • Recommendation and fix
      • Patching Policy
        • Details of finding
        • Recommendation and fix
      • Lockdown Policy
        • Details of finding
        • Recommendation and fix
      • Trust Policy
        • Details of finding
        • Recommendation and fix
    • Business Continuity Policy
      • Backup Policy
        • Details of finding
        • Recommendation and fix
      • Replacement premises provisioning
        • Details of finding
        • Recommendation and fix
      • Replacement personnel provisioning
        • Details of finding
        • Recommendation and fix
      • Replacement software provisioning
        • Details of finding
        • Recommendation and fix
      • Replacement hardware provisioning
        • Details of finding
        • Recommendation and fix
      • Replacement document provisioning
        • Details of finding
        • Recommendation and fix
  • Annexes
    • Glossary of Terms
      • Buffer Overflow
        • Normally takes the form of inputting an overly long string of characters or commands that the system cannot deal with. Some functions have a finite space available to store these characters or commands, and any extra characters etc. over and above this will then start to overwrite other portions of code; in worst-case scenarios this will enable a remote user to gain a remote command prompt with the ability to interact directly with the local machine.
      • Denial of Service
        • These are aimed attacks designed to deny a particular service that you rely on to conduct your business, for example by overtaxing a web server with multiple requests intended to slow it down and possibly cause it to crash. Traditionally such attacks emanated from one particular source.
      • Directory Traversal
        • Basically when a user or function tries to “break” out of the normal parent directory specified for the application and traverse elsewhere within the system, possibly gaining access to sensitive files or directories in the process.
      • Social Engineering
        • Normally uses a limited range of distinct subject matter to entice users to open and run an attachment say. Usually associated with phishing/E-mail type attacks. The main themes are:
          • Sexual – Sexual ideas/pictures/websites,
          • Curiosity – Friendly themes/appealing to someone’s passion or obsession,
          • Fear – Reputable sources/virus alert,
          • Authority – Current affairs/bank e-mails/company e-mails.
      • SQL Injection etc.
        • Basically when a low-privileged user interactively executes PL/SQL commands on the database server by adding additional syntax into standard arguments, which is then passed to a particular function, enabling enhanced privileges.
    • Network Map/Diagram
    • Accompanying Scan Results – CD-ROM
    • Vulnerability Definitions
      • Critical
        • A vulnerability allowing remote code execution, elevation of privilege or a denial of service on an affected system.
      • Important
        • A security weakness, whose exploitation may result in the compromise of the Confidentiality, Integrity or Availability of the company’s data.
      • Information Leak
        • Insecure services and protocols are being employed by the system, potentially allowing unrestricted access to sensitive information, i.e.:
          a. The use of the Finger and Sendmail services may allow enumeration of User IDs.
          b. Anonymous FTP and Web based services are being offered on network devices or peripherals.
          c. Disclosure of Operating System and Application version details, and personal details of system administration staff.
      • Concern
        • The current systems configuration has a risk potential to the network concerned though the ability to exploit this is mitigated by factors such as default configuration, auditing, or the difficulty level or access level required to carry out an exploit. This includes the running of network-enabled services that are not required by the current business continuity process.
      • Unknowns
        • An unknown risk is an unclear response to a test, or an action whose impact appears minimal. The test identifying this risk may or may not be repeatable. While the results do not represent a security risk per se, they should be investigated and rectified where possible. Unknowns may also be due to false positives being reported, but they do require a follow-up response.
    • Details of Tools Utilized.
    • Methodology Utilized.
      • Reconnaissance
        • The tester would attempt to gather as much information as possible about the selected network. Reconnaissance can take two forms: passive and active. A passive approach is usually the best starting point, as it would normally defeat intrusion detection systems and other protection afforded to the network; it typically involves discovering publicly available information using a web browser, newsgroups and so on. An active approach is more intrusive, may show up in audit logs, and may take the form of an attempted DNS zone transfer or a social engineering attack.
      • Enumeration
        • The tester would use varied operating system fingerprinting tools to determine what hosts are alive on the network and more importantly what services and operating systems they are running. Research into these services would then be carried out to tailor the test to the discovered services.
      • Scanning
        • By use of vulnerability scanners, all discovered hosts would be tested for vulnerabilities. The results would then be analyzed to determine if there are any vulnerabilities that could be exploited to gain access to a target host on the network.
      • Obtaining Access
        • By use of published exploits or weaknesses found in applications, operating systems and services, access would then be attempted. This may be done surreptitiously or by more brute-force methods, e.g. the use of exploit engines such as Metasploit or password-cracking tools such as John the Ripper.
      • Maintaining Access
        • This is done by installing a backdoor into the target network to allow the tester to return as and when required. This may be by means of a rootkit, backdoor trojan or simply the addition of bogus user accounts.
      • Erasing Evidence
        • It should ideally not be possible to erase the logs that record the testing team's attempts to access the network. These logs are the first piece of evidence that a possible breach of company security has occurred and should be protected at all costs. An attempt to erase or alter them should prove unsuccessful, ensuring that if a malicious attacker did gain access to the network, their every movement would be recorded.
      • See Penetration Test Framework for more detail
    • Apart from the above, some sample security testing reports:
      • Sample Penetration Test Report by Offensive Security — an excellent report by an excellent team.
      • Writing a Penetration Testing Report — probably one of the best papers on this subject, written by Mansour A. Alharbi for his GIAC certification. The author starts with report development stages, then describes the report format and ends with a sample report.
      • Penetration Testing Report — sample report by
      • Penetration Test Report — another good sample report
      • Penetration Test Report — sample OSSAR report
      • Penetration testing report template — template by
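As a minimal illustration of the scanning phase described in the methodology above, here is a plain TCP connect() scan sketch in Python. The function name and parameters are mine; a real engagement would use dedicated tools such as nmap, which also fingerprint service versions.

```python
# Minimal TCP connect() scan sketch (illustrative only).
import socket

def scan_ports(host, ports, timeout=0.5):
    """Return the subset of `ports` accepting a TCP connection on `host`."""
    open_ports = []
    for port in ports:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(timeout)
            if s.connect_ex((host, port)) == 0:  # 0 means the connect succeeded
                open_ports.append(port)
    return open_ports
```

A connect() scan is noisy and easily logged; it belongs to the active, detectable end of the testing spectrum discussed above.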
Hope it helps

Tuesday, 30 June 2015

Security Notes In Summary

What Is Message Authentication

  • message authentication is concerned with:
    • protecting the integrity of a message
    • validating identity of originator
    • non-repudiation of origin (dispute resolution)
  • electronic equivalent of a signature on a message
  • an authenticator, signature, or message authentication code (MAC) is sent along with the message
  • the MAC is generated via some algorithm which depends on both the message and some secret key known only to the sender and receiver
  • the message may be of any length
  • the MAC may be of any length, but more often is some fixed size, requiring the use of some hash function to condense the message to the required size if this is not achieved by the authentication scheme
  • need to consider replay problems with message and MAC
    • require a message sequence number, timestamp or negotiated random values
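In modern terms the MAC described above is typically an HMAC. A small sketch using Python's standard hmac module, with a sequence number folded into the authenticated data to address the replay concern (all names are illustrative):

```python
import hmac, hashlib

def make_mac(key: bytes, seq: int, message: bytes) -> bytes:
    # Authenticate a sequence number together with the message so an old
    # (message, MAC) pair cannot simply be replayed.
    data = seq.to_bytes(8, "big") + message
    return hmac.new(key, data, hashlib.sha256).digest()

def verify_mac(key: bytes, seq: int, message: bytes, tag: bytes) -> bool:
    # compare_digest avoids timing side-channels in the comparison
    return hmac.compare_digest(make_mac(key, seq, message), tag)
```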

Authentication using Private-Key Ciphers

  • if a message is being encrypted using a session key known only to the sender and receiver, then the message may also be authenticated
    • since only sender or receiver could have created it
    • any interference will corrupt the message (provided it includes sufficient redundancy to detect change)
    • but this does not provide non-repudiation since it is impossible to prove who created the message
  • message authentication may also be done using the standard modes of use of a block cipher
    • sometimes do not want to send encrypted messages
    • can use either CBC or CFB modes and send final block, since this will depend on all previous bits of the message
    • no hash function is required, since this method accepts arbitrary length input and produces a fixed output
    • usually use a fixed known IV
    • this is the approach used in the Australian EFT standard AS8205
    • major disadvantage is small size of resulting MAC since 64-bits is probably too small
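The "send the final CBC block" construction can be sketched as follows. The block cipher here is a keyed stand-in built from SHA-256, purely to show the chaining structure; a real implementation would use DES or AES, and this toy function is not secure:

```python
import hashlib

BLOCK = 8  # 64-bit blocks, as in the DES-era schemes discussed above

def toy_E(key: bytes, block: bytes) -> bytes:
    # Stand-in for a real block cipher such as DES -- NOT secure.
    return hashlib.sha256(key + block).digest()[:BLOCK]

def cbc_mac(key: bytes, msg: bytes) -> bytes:
    msg += b"\x00" * (-len(msg) % BLOCK)   # zero-pad to whole blocks
    prev = b"\x00" * BLOCK                 # fixed, known IV
    for i in range(0, len(msg), BLOCK):
        xored = bytes(a ^ b for a, b in zip(prev, msg[i:i + BLOCK]))
        prev = toy_E(key, xored)
    return prev  # the final block, which depends on every message bit
```

Note the MAC is exactly one cipher block, which is why the 64-bit size noted above is a concern.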

What are Hashing Functions

  • hashing functions are used to condense an arbitrary length message to a fixed size, usually for subsequent signature by a digital signature algorithm
  • good cryptographic hash function h should have the following properties:
    • h should destroy all homomorphic structures in the underlying public key cryptosystem (be unable to compute hash value of 2 messages combined given their individual hash values)
    • h should be computed on the entire message
    • h should be a one-way function so that messages are not disclosed by their signatures
    • it should be computationally infeasible given a message and its hash value to compute another message with the same hash value
    • should resist birthday attacks (finding any 2 messages with the same hash value, perhaps by iterating through minor permutations of 2 messages)
  • it is usually assumed that the hash function is public and not keyed
  • traditional CRCs do not satisfy the above requirements
  • length should be large enough to resist birthday attacks (64-bits is now regarded as too small, 128-512 proposed)
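The birthday bound behind that length requirement is easy to demonstrate: on an n-bit hash a collision is expected after roughly 2^(n/2) trials. A small search against a deliberately truncated SHA-256 (the truncation is mine, purely so the demo runs quickly):

```python
import hashlib

def truncated(i: int, bits: int) -> int:
    # First `bits` bits of SHA-256 of the decimal string for i.
    h = hashlib.sha256(str(i).encode()).digest()
    return int.from_bytes(h, "big") >> (256 - bits)

def find_collision(bits: int):
    # Birthday search: expected work is about 2**(bits/2) trials.
    seen = {}
    i = 0
    while True:
        tag = truncated(i, bits)
        if tag in seen:
            return seen[tag], i   # two distinct inputs, same truncated hash
        seen[tag] = i
        i += 1
```

A 16-bit truncation falls in a few hundred trials; a 128-bit hash would need around 2^64, which is why 64-bit outputs are regarded as too small.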

What is Snefru?

  • a one-way hash function designed by Ralph Merkle
  • creates 128 or 256 bit long hash values (let m be length)
  • uses an algorithm H which hashes 512-bits to m-bits, taking the first m output bits of H as the hash value
    • H is based on a reversible block cipher E operating on 512-bit blocks
    • H is the last m-bits of the output of E XOR'd with the first m-bits of the input of E
    • E is composed of several passes, each pass has 64 rounds of an S-box lookup and XOR
    • E can use 2 to 8 passes
  • overview of algorithm
    • break the message into (512-m)-bit chunks
    • each chunk has the previous hash value appended (assuming an IV of 0)
    • H is computed on this value, giving a new hash value
    • after the last block (0 padded to size as needed) the hash value is appended to a message length value and H computed on this, the resulting value being the MAC
  • Snefru has been broken by a birthday attack by Biham and Shamir for 128-bit hashes, and possibly for 256-bit when 2 to 4 passes are used in E
  • Merkle recommends 8 passes, but this is slow

What are MD2, MD4 and MD5

  • family of one-way hash functions by Ronald Rivest
  • MD2 is the oldest, produces a 128-bit hash value, and is regarded as slower and less secure than MD4 and MD5
  • MD4 produces a 128-bit hash of the message, using bit operations on 32-bit operands for fast implementation
R L Rivest, "The MD4 Message Digest Algorithm", Advances in Cryptology - Crypto'90, Lecture Notes in Computer Science No 537, Springer-Verlag 1991, pp303-311
  • MD4 overview
    • pad message so its length is 448 mod 512
    • append a 64-bit message length value to message
    • initialise the 4-word (128-bit) buffer (A,B,C,D)
    • process the message in 16-word (512-bit) chunks, using 3 rounds of 16 operations each on the chunk & buffer
    • output hash value is the final buffer value
  • some progress at cryptanalysing MD4 has been made, with a small number of collisions having been found
  • MD5 was designed as a strengthened version, using four rounds, a little more complex than in MD4
  • a little progress at cryptanalysing MD5 has been made with a small number of collisions having been found
  • both MD4 and MD5 are still in use and considered secure in most practical applications
  • both are specified as Internet standards (MD4 in RFC1320, MD5 in RFC1321)
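Both digests are available in Python's hashlib; for example, MD5's 128-bit output on "abc", a test vector published in RFC 1321:

```python
import hashlib

# MD5 produces a 128-bit (16-byte) digest, shown as 32 hex digits.
digest = hashlib.md5(b"abc")
assert digest.digest_size == 16
print(digest.hexdigest())  # 900150983cd24fb0d6963f7d28e17f72
```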

SHA (Secure Hash Algorithm)

  • SHA was designed by NIST & NSA and is the US federal standard for use with the DSA signature scheme (nb the algorithm is SHA, the standard is SHS)
  • it produces 160-bit hash values
  • SHA overview
    • pad message so its length is a multiple of 512 bits
    • initialise the 5-word (160-bit) buffer (A,B,C,D,E) to (67452301,efcdab89,98badcfe,10325476,c3d2e1f0)
    • process the message in 16-word (512-bit) chunks, using 4 rounds of 20 operations each on the chunk & buffer
    • output hash value is the final buffer value
  • SHA is a close relative of MD5, sharing much common design, but each having differences
  • SHA has very recently been subject to modification following NIST identification of some concerns, the exact nature of which is not public
  • current version is regarded as secure
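SHA's 160-bit output can be seen via hashlib (SHA-1 here; "abc" is the published FIPS test vector):

```python
import hashlib

# SHA-1 produces a 160-bit (20-byte) digest.
digest = hashlib.sha1(b"abc")
assert digest.digest_size * 8 == 160
print(digest.hexdigest())  # a9993e364706816aba3e25717850c26c9cd0d89d
```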

Other Hash Functions

HAVAL

  • a variable length one-way hash function designed by the University of Wollongong and published at Auscrypt'92
  • it processes messages in 1024-bit blocks, using an 8-word buffer and 3 to 5 rounds of 16 steps each, creating hash values of 128, 160, 192, 224, or 256 bits in length
  • uses highly non-linear 7-variable functions in each step
  • is faster than MD5
  • it is not subject to MD5-type analysis; no attack is known

Using Private Key Ciphers

  • a large number of "Modes of Use" have been proposed which use a block cipher to create a hash value
  • original proposal was by Davies and Meyer
  • many other proposals
  • most have been broken using a birthday attack
  • the design of fast, secure hash functions of this form is still being studied, with many questions unresolved

What Are Digital Signature Schemes

  • public key signature schemes
  • the private-key signs (creates) signatures, and the public-key verifies signatures
  • only the owner (of the private-key) can create the digital signature, hence it can be used to verify who created a message
  • anyone knowing the public key can verify the signature (provided they are confident of the identity of the owner of the public key - the key distribution problem)
  • usually don't sign the whole message (doubling the size of information exchanged), but just a hash of the message
  • digital signatures can provide non-repudiation of message origin, since an asymmetric algorithm is used in their creation, provided suitable timestamps and redundancies are incorporated in the signature

RSA Signature Scheme

  • RSA encryption and decryption are commutative, hence it may be used directly as a digital signature scheme
    • given an RSA scheme {(e,R), (d,p,q)}
  • to sign a message, compute:
    • S = M^d (mod R)
  • to verify a signature, compute:
    • M = S^e (mod R) = M^(e.d) (mod R) = M (mod R)
  • thus know the message was signed by the owner of the public-key
  • it would seem obvious that a message may be encrypted, then signed, using RSA without increasing its size
    • but there is a blocking problem, since the message is encrypted using the receiver's modulus but signed using the sender's modulus (which may be smaller)
    • several approaches possible to overcome this
  • more commonly use a hash function to create a separate MDC which is then signed
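The sign/verify equations above can be checked with textbook (unpadded) RSA and small illustrative primes; a real system would sign a hash of the message under a proper padding scheme:

```python
# Textbook RSA used directly as a signature scheme (tiny illustrative primes).
p, q = 61, 53
R = p * q                          # public modulus
e = 17                             # public exponent
d = pow(e, -1, (p - 1) * (q - 1))  # private exponent (pow(-1) needs Python 3.8+)

def sign(M: int) -> int:
    return pow(M, d, R)            # S = M^d (mod R)

def verify(M: int, S: int) -> bool:
    return pow(S, e, R) == M       # check M = S^e (mod R)
```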

El Gamal Signature Scheme

  • whilst the ElGamal encryption algorithm is not commutative, a closely related signature scheme exists
  • El Gamal Signature scheme
  • given prime p, public random number g, private (key) random number x, compute
    • y = g^x (mod p)
  • public key is (y,g,p)
    • nb (g,p) may be shared by many users
    • p must be large enough so discrete log is hard
  • private key is (x)
  • to sign a message M
    • choose a random number k, GCD(k,p-1)=1
    • compute
    • a = g^k (mod p)
    • use extended Euclidean (inverse) algorithm to solve
    • M = x.a + k.b (mod p-1)
    • the signature is (a,b); k must be kept secret
    • (like ElGamal encryption, the signature is double the message size)
  • to verify a signature (a,b) confirm:
    • y^a.a^b (mod p) = g^M (mod p)
Example of ElGamal Signature Scheme
  • given p=11, g=2
  • choose private key x=8
  • compute
    • y = g^x (mod p) = 2^8 (mod 11) = 3
  • public key is (y=3, g=2, p=11)
  • to sign a message M=5
    • choose random k=9
    • confirm gcd(10,9)=1
    • compute
      • a = g^k (mod p) = 2^9 (mod 11) = 6
    • solve
      • M = x.a+k.b(mod p-1)
      • 5 = 8.6+9.b(mod 10)
      • giving b = 3
    • signature is (a=6,b=3)
  • to verify the signature, confirm the following are correct:
    • y^a.a^b (mod p) = g^M (mod p)
    • 3^6.6^3 (mod 11) = 2^5 (mod 11)
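The worked example can be re-checked mechanically with Python's pow() (the modular inverse form pow(k, -1, m) needs Python 3.8+):

```python
# Re-checking the worked ElGamal example above.
p, g, x = 11, 2, 8
y = pow(g, x, p)                               # y = g^x (mod p) = 3
k, M = 9, 5
a = pow(g, k, p)                               # a = g^k (mod p) = 6
# M = x.a + k.b (mod p-1)  =>  b = (M - x.a).k^-1 (mod p-1)
b = (M - x * a) * pow(k, -1, p - 1) % (p - 1)  # b = 3
# verify: y^a . a^b (mod p) = g^M (mod p)
assert pow(y, a, p) * pow(a, b, p) % p == pow(g, M, p)
```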

DSA (Digital Signature Algorithm)

  • DSA was designed by NIST & NSA and is the US federal standard signature scheme (used with SHA hash alg)
    • DSA is the algorithm, DSS is the standard
    • there was considerable reaction to its announcement!
      • debate over whether RSA should have been used
      • debate over the provision of a signature only alg
  • DSA is a variant on the ElGamal and Schnorr algorithms
  • description of DSA
    • p, a prime number of L bits, where L = 512 to 1024 and a multiple of 64
    • q, a 160-bit prime factor of p-1
    • g = h^((p-1)/q) (mod p), where h is any number less than p-1 with h^((p-1)/q) (mod p) > 1
    • x, a number less than q (the private key)
    • y = g^x (mod p)
  • to sign a message M
    • generate random k, k<q
    • compute
      • r = (g^k (mod p)) (mod q)
      • s = k^-1.(SHA(M) + x.r) (mod q)
    • the signature is (r,s)
  • to verify a signature:
    • w = s^-1 (mod q)
    • u1 = (SHA(M).w) (mod q)
    • u2 = r.w (mod q)
    • v = (g^u1.y^u2 (mod p)) (mod q)
    • if v=r then the signature is verified
  • comments on DSA
    • there was originally a suggestion to use a common modulus; this would have made a tempting target, and was discouraged
    • it is possible to do both ElGamal and RSA encryption using DSA routines, this was probably not intended :-)
    • DSA is patented with royalty free use, but this patent has been contested, situation unclear
    • Gus Simmons has found a subliminal channel in DSA, could be used to leak the private key from a library - make sure you trust your library implementer
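A toy run of the DSA equations, with tiny parameters of my own choosing (q = 11, p = 23) and a small integer H standing in for SHA(M); real DSA uses the parameter sizes described above:

```python
# Toy DSA with tiny illustrative parameters.
p, q = 23, 11                 # q is a prime factor of p-1 (22 = 2*11)
h = 2
g = pow(h, (p - 1) // q, p)   # g = h^((p-1)/q) (mod p)
x = 3                         # private key, x < q
y = pow(g, x, p)              # public key

def sign(H: int, k: int):
    r = pow(g, k, p) % q                 # r = (g^k (mod p)) (mod q)
    s = pow(k, -1, q) * (H + x * r) % q  # s = k^-1.(H + x.r) (mod q)
    return r, s

def verify(H: int, r: int, s: int) -> bool:
    w = pow(s, -1, q)                    # w = s^-1 (mod q)
    u1, u2 = H * w % q, r * w % q
    v = pow(g, u1, p) * pow(y, u2, p) % p % q
    return v == r
```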

Other Signature Schemes

Fiat-Shamir

  • originally a signature scheme, subsequently modified into a zero knowledge proof of identity scheme
  • based on difficulty of finding square roots mod pq
  • this algorithm is patented

Schnorr

  • combines ideas from ElGamal and Fiat-Shamir schemes
  • uses exponentiation mod p and mod q
  • much of the computation can be completed in a pre-computation phase before signing
  • for the same level of security, signatures are significantly smaller than with RSA
  • this algorithm is patented

What is Key Management In Security

  • all cryptographic systems have the problem of how to securely and reliably distribute the keys used
  • in many cases, failures in a secure system are due not to breaking the algorithm, but to breaking the key distribution scheme
  • ideally the distribution protocol should be formally verified, recent advances make this more achievable
  • possible key distribution techniques include:
    • physical delivery by secure courier
      • eg code-books used by submarines
      • one-time pads used by diplomatic missions
      • registration name and password for computers
    • authentication key server (private key, eg Kerberos)
      • have an on-line server trusted by all clients
      • server has a unique secret key shared with each client
      • server negotiates keys on behalf of clients
    • public notary (public key, eg SPX)
      • have an off-line server trusted by all clients
      • server has a well known public key
      • server signs public key certificates for each client

What are Authentication Protocols

  • if using a key server, must use some protocol between user and server
  • this protocol should be validated; formal techniques exist to achieve this (BAN logic provers)

Challenge-Response

  • basic technique used to ensure a password is never sent in the clear
  • given a client and a server share a key
    • server sends a random challenge vector
    • client encrypts it with private key and returns this
    • server verifies response with copy of private key
  • can repeat protocol in other direction to authenticate server to client (2-way authentication)
  • in simplest form, keys are physically distributed before secure communications are required
  • in more complex forms, keys are stored in a central trusted key server
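A sketch of the exchange follows. It uses HMAC as the keyed function rather than the encryption described above, and all names are illustrative; the key point is that the password-derived key never crosses the wire, only a fresh random challenge and its keyed response:

```python
import hmac, hashlib, secrets

def server_challenge() -> bytes:
    return secrets.token_bytes(16)        # random challenge vector

def client_response(key: bytes, challenge: bytes) -> bytes:
    return hmac.new(key, challenge, hashlib.sha256).digest()

def server_verify(key: bytes, challenge: bytes, response: bytes) -> bool:
    expected = hmac.new(key, challenge, hashlib.sha256).digest()
    return hmac.compare_digest(expected, response)
```

Running the same exchange in the other direction gives the 2-way authentication mentioned above.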

Needham-Schroeder Protocol

  • original third-party key distribution protocol
R M Needham, M D Schroeder, "Using Encryption for Authentication in Large Networks of Computers", CACM, 21(12), Dec 1978, pp993-998
  • given that Alice wants to communicate with Bob via a key server S, the protocol is:
Message 1 A -> S A, B, Na
Message 2 S -> A EKas{Na , B, Kab, EKbs{Kab, A} }
Message 3 A -> B EKbs{Kab, A}
Message 4 B -> A EKab{Nb}
Message 5 A-> B EKab{Nb-1}
nb: Na is a random value chosen by Alice, Nb random chosen by Bob
  • after this protocol runs, Alice and Bob share a secret session key Kab for secure communication
  • unfortunately this protocol contains a fatal flaw: Message 3 can be subject to a replay attack, with an old compromised session key, by an active attacker
  • this has been corrected either by:
    • including a timestamp in messages 1 to 3, which requires synchronised clocks (by Denning & Sacco 81)
    • having A ask B for a random value Jb to be sent to S for return in EKbs{Kab, A, Jb} (by Needham & Schroeder 87)
  • many other protocols exist but care is needed
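The message flow can be simulated with a stand-in "cipher" that merely records which key sealed each payload; this is purely illustrative plumbing with no real cryptography, intended only to trace who can open what:

```python
import os

def E(key, payload):
    # Seal a payload under a key (stand-in for real symmetric encryption).
    return ("sealed", key, payload)

def D(key, box):
    # Open a sealed payload, refusing any other key.
    tag, k, payload = box
    assert tag == "sealed" and k == key, "wrong key"
    return payload

def run_protocol() -> bool:
    Kas, Kbs = os.urandom(16), os.urandom(16)  # keys A and B share with S
    Na = os.urandom(8)                         # Message 1: A -> S : A, B, Na
    Kab = os.urandom(16)                       # fresh session key chosen by S
    # Message 2: S -> A : E_Kas{Na, B, Kab, E_Kbs{Kab, A}}
    msg2 = E(Kas, (Na, "B", Kab, E(Kbs, (Kab, "A"))))
    na, peer, kab_a, ticket = D(Kas, msg2)
    assert na == Na                            # A checks freshness via Na
    # Message 3: A -> B : E_Kbs{Kab, A}; B opens the ticket with Kbs
    kab_b, origin = D(Kbs, ticket)
    # Messages 4/5 (the nonce handshake under Kab) are elided here
    return kab_a == kab_b and origin == "A"
```

Note the flaw above lives exactly in the ticket of Message 3: nothing in it proves freshness to B, which is what the timestamp and Jb fixes address.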

Kerberos - An Example of a Key Server

  • trusted key server system developed by MIT
  • provides centralised third-party authentication in a distributed network
  • access control may be provided for
    • each computing resource
    • in either a local or remote network (realm)
  • has a Key Distribution Centre (KDC), containing a database of:
    • principals (customers and services)
    • encryption keys
  • basic third-party authentication scheme
  • KDC provides non-corruptible authentication credentials (tickets or tokens)

Kerberos - Initial User Authentication

  • user requests an initial ticket from KDC
  • used as basis for all remote access requests

Kerberos - Request for a Remote Service

  • user requests access to a remote service
    • obtains a ticket from KDC protected with remote key
    • sends ticket with request to remote server

Kerberos - in practice

  • currently have two Kerberos versions
    • 4 : restricted to a single realm
    • 5 : allows inter-realm authentication, in beta test
  • Kerberos v5 is an Internet standard
    • specified in RFC1510, and used by many utilities
  • to use Kerberos
    • need to have a KDC on your network
    • need to have Kerberised applications running on all participating systems
  • major problem - US export restrictions
    • Kerberos cannot be directly distributed outside the US in source format (& binary versions must obscure crypto routine entry points and have no encryption)
    • else crypto libraries must be reimplemented locally

X.509 - Directory Authentication Service

  • part of CCITT X.500 directory services
  • defines framework for authentication services
  • directory may store public-key certificates
  • uses public-key cryptography and digital signatures
  • algorithms not standardised but RSA is recommended

X.509 Certificate

  • issued by a Certification Authority (CA)
  • each certificate contains:
    • version
    • serial number (unique within CA)
    • algorithm identifier (used to sign certificate)
    • issuer (CA)
    • period of validity (from - to dates)
    • subject (name of owner)
    • public-key (algorithm, parameters, key)
    • signature (of hash of all fields in certificate)
  • any user with access to CA can get any certificate from it
  • only the CA can modify a certificate

CA Hierarchy

  • CA form a hierarchy
  • each CA has certificates for clients and parent
  • each client trusts its parent's certificates
  • enable verification of any certificate from one CA by users of all other CAs in hierarchy
  • X<<A>> means certificate for A signed by authority X

  • A acquires B's certificate following the chain:
  • X<<W>>W<<V>>V<<Y>>Y<<Z>>Z<<B>>
  • B acquires A's certificate following the chain:
  • Z<<Y>>Y<<V>>V<<W>>W<<X>>X<<A>>

Authentication Procedures

  • X.509 includes three alternative authentication procedures
One-Way Authentication
  • 1 message ( A->B) to establish
    • the identity of A, and that the message is from A
    • that the message was intended for B
    • integrity & originality of message
Two-Way Authentication
  • 2 messages (A->B, B->A) which also establishes
    • the identity of B, and that the reply is from B
    • that the reply was intended for A
    • integrity & originality of reply
Three-Way Authentication
  • 3 messages (A->B, B->A, A->B) which enables
    • the above authentication without synchronised clocks

Security in Practice - Secure Email

  • email is one of the most widely used and highly regarded network services
  • currently message contents are not secure
    • may be inspected either in transit
    • or by suitably privileged users on the destination system
  • Email Privacy Enhancement Services
    • confidentiality (protection from disclosure)
    • authentication (of originator)
    • message integrity (protection from modification)
    • non-repudiation of origin (protection from denial by sender)
  • can't assume real-time access to a trusted key server
  • often implement using Email Encapsulation

What is PEM

  • Privacy Enhanced Mail
  • Internet standard for security enhancements to Internet (RFC822) email
    • developed by a Working group of the IETF
    • specified in RFC1421, RFC1422, RFC1423, RFC1424
  • uses message encapsulation to add features
  • confidentiality - DES encryption in CBC mode
  • integrity - DES encrypted MIC (MD2/MD5)
  • authentication - DES/RSA encrypted MIC
  • non-repudiation - RSA encrypted MIC

PEM - Key Management

  • central key server (private-key)
    • requires access to on-line server
  • public-key certificates
    • uses X.509 Directory Service Strong Authentication to protect key certificates
    • signed by a Certification Authority (CA)
    • CAs form a hierarchy to permit cross-validation of certificates
    • CAs must be licensed by RSA Data Inc.
    • currently only licensed in US/Canada

What is PGP

  • Pretty Good Privacy
  • widely used de facto secure email standard
    • developed by Phil Zimmermann
    • available on Unix, PC, Macintosh and Amiga systems
    • free!!!!
  • confidentiality - IDEA encryption
  • integrity - RSA encrypted MIC (MD5)
  • authentication & non-repudiation - RSA encrypted MIC
  • uses grass-roots key distribution
    • trusted introducers used to validate keys
    • no certification authority hierarchy needed

PGP - In Use

  • all PGP functions are performed by a single program
  • must be integrated into existing email/news
  • each user has a keyring of known keys
    • containing their own public and private keys (protected by a password)
    • public keys given to you directly by a person
    • public keys signed by trusted introducers
  • used to sign/encrypt your messages
  • used to validate messages received

Sample PGP Message


-----BEGIN PGP SIGNED MESSAGE-----

May all your signals trap
May your references be bounded
All memory aligned
Floats to ints be rounded

-----BEGIN PGP SIGNATURE-----
Version: 2.3

-----END PGP SIGNATURE-----

PGP - Issues

  • there were questions of legality, but PGP may now be legally used by anyone in the world:
    • noncommercial use in US/Canada with licenced MIT version
    • commercial use in US/Canada with Viacrypt version
    • noncommercial use outside the US is probably legal with (non US sourced) international version
    • commercial use outside the US requires an IDEA licence for the international version
  • there is an on-going legal battle in the US between the US government and Phil Zimmermann over its original export

Security in Practice - SNMP

  • SNMP is a widely used network management protocol
  • comprises
    • a management station
    • a management agent with its management information base (MIB)
    • linked by a network management protocol (GET, SET)
  • SNMP v1 lacks any security (GET and SET are completely unauthenticated)
  • SNMP v2 includes security extensions for
    • message authentication (keyed MD5)
    • message secrecy (DES)
  • based on the SNMPv2 party (sender & receiver roles)
    • used for access control & key management
    • all associated information stored in a party MIB
  • assumes synchronised clocks (within a set interval)

User Authentication

(ref Davies Ch 7)
  • user authentication (identity verification)
    • convince system of your identity
    • before it can act on your behalf
  • sometimes also require that the computer verify its identity with the user
  • user authentication is based on three methods
    • what you know
    • what you have
    • what you are
  • all then involve some validation of the information supplied against a table of possible values, based on the user's claimed identity

What you Know

Passwords or Pass-phrases

  • prompt user for a login name and password
  • verify identity by checking that password is correct
  • on some (older) systems the password was stored in the clear; this is now regarded as insecure, since a break-in compromises all users of the system
  • more often use a one-way function, whose output cannot easily be used to find the input value
    • either takes a fixed sized input (eg 8 chars)
    • or based on a hash function to accept a variable sized input to create the value
  • important that passwords are selected with care to reduce risk of exhaustive search
Denning Computer (In)security Fig 2 & 3, pp111-12
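A modern equivalent of the one-way function approach is a salted, iterated hash such as PBKDF2; the salt and iteration count shown here are illustrative choices, not requirements:

```python
import hashlib, hmac, os

def hash_password(password: str, salt: bytes = None):
    # Store (salt, digest); the digest cannot easily be inverted to the password.
    salt = salt if salt is not None else os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return salt, digest

def check_password(password: str, salt: bytes, digest: bytes) -> bool:
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return hmac.compare_digest(candidate, digest)
```

The per-user salt and the iteration count both raise the cost of the exhaustive search mentioned above.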

One-shot Passwords

  • one problem with traditional passwords is eavesdropping of their transfer over an insecure network

  • one possible solution is to use one-shot (one-time) passwords
  • these are passwords used once only
  • future values cannot be predicted from older values

  • either generate a printed list, and keep matching list on system to be accessed (cf home banking)
  • or use an algorithm based on a one-way function f (eg MD5) to generate previous values in series (eg SKey)
    • start with a secret password s, and number N
      • p(0) = f^N(s)
    • next password in series is
      • p(1) = f^(N-1)(s)
    • must reset password after N uses
  • generally good only for infrequent access
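The S/Key-style chain above can be sketched with MD5 as f (the seed and chain length are illustrative):

```python
import hashlib

def f(x: bytes) -> bytes:              # the one-way function (MD5 here)
    return hashlib.md5(x).digest()

def f_n(x: bytes, n: int) -> bytes:    # n-fold iteration f^n(x)
    for _ in range(n):
        x = f(x)
    return x

s, N = b"secret seed", 100             # user's secret and chain length
stored = f_n(s, N)                     # host stores p(0) = f^N(s)

def login(offered: bytes) -> bool:
    # Host applies f once and compares; on success it advances the stored
    # value, so the next expected password is one step earlier in the chain.
    global stored
    if f(offered) == stored:
        stored = offered
        return True
    return False
```

An eavesdropper who captures one password cannot compute the next, since that would require inverting f.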

What you Have

  • here verify identity based on possession of some object, often also combined with a password

Magnetic Card, Magnetic Key

  • possess an item with the required code value encoded in it (eg access control cards)

Smart Card or Calculator

  • may interact with system
  • may require information from user
  • could be used to actively calculate:
    • a time dependent password
    • a one-shot password
    • a challenge-response verification
    • public-key based verification
Davies fig 7.7 & 7.8 pp184-84

What you Are

  • here verify identity based on your physical characteristics or involuntary response patterns
  • known as biometrics
  • characteristics used include:
    • signature (usually dynamic)
    • fingerprint
    • hand geometry
    • face or body profile
    • speech
    • retina pattern
  • always have a tradeoff between
    • false rejection (type I error)
    • false acceptance (type II error)
Davies fig 7.12 p195


Hope it helps - blueberry