Start of the VPN Wars

The use of VPNs has grown exponentially in the last few years.  Of course, they have been around far longer: in the context of the internet, a virtual private network has always provided a simple and secure way to traverse the insecure public infrastructure back to a specific point.

For people who worked internationally it was a huge bonus: being able to log in from any network and safely access your company's email and document servers just as if you were sitting in the office.  In the corporate world a VPN client, usually made and supported by a vendor such as IBM, was installed on virtually all laptops and mobile devices.

Indeed VPN client software is still used like this by multinational companies across the world, but its use has been overtaken by VPN services aimed at private individuals, such as Identity Cloaker.  At first these were used for a similar purpose, primarily to provide a secure and safe connection while using the internet.  However their role began to expand in response to the growth of region locking by some of the biggest media sites on the web.

Region locking is the practice of blocking or filtering access to websites based on your physical location.  The location of a browser is determined from the IP address of the client, which basically means that if you are in France you will have a French IP address, in Canada a Canadian one, and so on.  This information is used to restrict access, mostly by the big media sites, stopping non-US surfers from using the big American services like HBO, ABC and Hulu, for instance.

These VPN services bypass all these restrictions: using such a service you can hide your real IP address behind the VPN server's address and effectively access anything you like.  You can sit in one country and appear to be in another; suddenly it was possible to watch BBC News live online from anywhere in the world, not just the UK.  It worked for all the other big media sites too, including those which were originally US only.

Literally millions of people use these tools every day to watch anything from HBO to Netflix irrespective of their location.  Originally, because the providers had no reliable method of detecting the use of a VPN, the practice was tolerated.  However, increasing pressure from copyright holders has led to a crackdown on the practice, including a particularly aggressive blocking campaign from the media giant Netflix.

How's it being done? More in the next post.


Using TCP Dump Filters

TCPdump is a Unix tool which can be used to collect data from networks and helps decipher the output in an easy-to-understand manner.  At first, trying to make sense of any quantity of captured data can be a real trial, and that's where using the analyser and its filters becomes so important.

If you leave it at the defaults, TCPdump will examine all records, either directly on the wire or from a dump file created by another program.  The problem is that this amount of data can be very difficult to analyse, especially if you're not that experienced.  For network personnel looking for specific activity, perhaps evidence of a malicious attack, or simply trying to solve a network-related issue, being able to focus quickly on specific addresses or protocols is essential.


This is where the use of filters is so important: you can create filters to look for specific information.  If you're solving a DNS problem or trying to fix a network sharing issue, simply create a filter which only displays the relevant protocol.

There is already help built in to the program; for example, you'll find that the most common, general filters are readily available.  So if you're looking for ICMP messages or DNS requests, you just need to apply the pre-prepared filter.  TCPdump assigns a designated name to each type of header associated with a specific protocol.  So, for example, 'ip' is specified for a field in the IP header or datagram, 'tcp' for part of the TCP header or datagram, and so on.

You can then reference fields within a specific protocol by working through their displacement from the beginning of that header.  These offsets don't change, so you can work out which bytes belong to the IP header, which to the TCP header, and so on.

It takes a bit of practice, but you should be able to use TCPdump to select any specific datagram, for example to look into an embedded protocol like ICMP.  This makes it much easier to troubleshoot specific issues.  It also allows you to filter out data which cannot be read, for example if that data is encrypted.  Imagine trying to solve network problems when half the data is being deliberately hidden, perhaps because many users are using proxies or VPNs.  Some of these are very commonly used even within corporate networks – here's an article discussing the best vpn for Netflix which people use to switch versions (usually to the US version of Netflix).
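
As a rough illustration of the kind of filters described above, here is a small sketch that drives tcpdump from Python with a capture filter built from protocol names and byte displacements.  The interface name and packet count are just assumptions for the example; adjust them for your own system.

    import subprocess

    # Capture filters using protocol names and byte displacements:
    #   icmp[0] = 8        -> ICMP echo requests (the type field is the first ICMP byte)
    #   tcp[13] & 2 != 0   -> TCP segments with the SYN flag set (byte 13 of the TCP header)
    #   udp port 53        -> DNS queries and responses over UDP
    capture_filter = "icmp[0] = 8 or tcp[13] & 2 != 0 or udp port 53"

    # Interface name and packet count are placeholders for this sketch.
    subprocess.run(["tcpdump", "-n", "-c", "20", "-i", "eth0", capture_filter], check=True)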


Automating Text Creation Using SGML

In 1995, the staff of the HTI began work on the American Verse Project, an electronic archive of American poetry. Although a few eighteenth-century works will soon be included, the vast majority of the works are from the nineteenth and early twentieth centuries. The collection is both browsable and searchable. Users who just wish to scan the listing of available texts and read a poem can, and many do; a number of the works are hard to find outside large academic libraries or are in very poor condition and don't circulate, so their availability on the internet is a great boon to readers and researchers. In some cases copyright restrictions meant using an intermediary server for access, for example an Irish proxy to reach material limited to the Irish Republic.

The capacity to search the collection is useful for tasks as simple as locating a poem that begins with the line "Thou art not lovelier than lilacs" or as complicated as comparing examples of flower imagery across early American poetry.

The list was enlarged to include poets of special interest to American literary historians in the Department of English at Michigan. A list of almost 400 American poets was compiled.

Working from this list, a survey of publications was made and an electronic bibliography of print and electronic versions was built. Several hundred titles from the Michigan collection were assessed to decide whether they fell within the scope of the project; texts were chosen and prioritized based on their scholarly interest as well as their physical properties (e.g., extent of deterioration and "scanability").

The volumes chosen for inclusion in the American Verse Project are scanned without being disbound; at present, the HTI is using its Xerox Scan Manager software for batch scanning and the Xerox 620 scanner; BSCAN and a Fujitsu 3096 scanner have also been used. TypeReader is the software package mainly used for optical character recognition (OCR); it has performed very well in recognizing the older typefaces in the nineteenth-century material and has an unobtrusive, user-friendly proofing interface. A UNIX program available from XIS, ScanWorx, has been used less often; because it can be trained to recognize non-standard characters, such as the long s, it is useful for the earliest volumes in the collection. PrimeOCR, a program developed by Prime Recognition that uses up to five OCR engines to radically improve OCR accuracy, is being assessed for potential use. The HTI gives a great deal of attention to accuracy in the digitization process, on the premise that access to reliable electronic texts is important.

After a volume is in electronic form, automated routines are run to supply a first level of SGML markup, identifying clear text structures (lines of poetry, page breaks, paragraphs) and potential scanning errors, such as missing pages. Careful manual markup follows in the next phase, using SoftQuad's Author/Editor SGML editing program and the TEI's "TEILite" DTD. In this phase the HTI's encoding staff clears up ambiguous markup introduced by the automatic tagging process and adds markup too sophisticated for the automated routines.
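
As a rough illustration of what such a first automated pass might look like (this sketch is not the HTI's actual routine; the tagging rules and sample text are invented), a few lines of OCR output can be wrapped in TEI-Lite-style line and page-break elements:

    # Illustrative sketch only: a naive first-pass tagger that wraps plain OCR text
    # in TEI-Lite-style elements. The real HTI routines are more sophisticated.
    def first_pass_markup(ocr_lines):
        out = []
        for line in ocr_lines:
            stripped = line.strip()
            if not stripped:
                out.append("")                      # blank line: stanza or paragraph gap
            elif stripped.startswith("[page"):
                out.append(f'<pb n="{stripped.strip("[]").split()[-1]}"/>')  # page break
            else:
                out.append(f"<l>{stripped}</l>")    # a line of verse
        return "\n".join(out)

    sample = ["[page 12]", "Thou art not lovelier than lilacs, no,", "Nor honeysuckle;"]
    print(first_pass_markup(sample))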

After encoding, a formatted, printed copy of the text is used to proofread against the original volume and for review of the markup by a senior encoder. All illustrations found in the original volume are scanned and referenced in the encoded text; an image of the title page and its verso is also included. Finally, complete bibliographic information, including the size of the electronic file as well as a local call number, is contained in the header of the electronic text. A cataloger reviews the header, and a record for the electronic text is made for Michigan's online library catalog.

Further Information
Obtaining a French IP Address – http://www.changeipaddress.net/french-ip-address/

Retaining Logs – Legal and Policy Requirements

In most countries there are legal and policy requirements about log retention. To comply with the leading audit standards such as Sarbanes-Oxley, ISO 9000 and Visa CISP, a corporate policy is a necessity. One important requirement of such a policy concerns critical system log archives: how these logs are stored and retained. Which standard you need to comply with will determine how things like event and SQL logs must be stored, and for how long.

To meet these requirements, a corporate policy covering log file archiving and retention is essential. Many companies adapt existing Syslog servers and store the data on these for whatever duration is specified, while others copy the files onto centralised file servers or shares. The other main option is to move the logs onto some sort of backup disk or tape system for long-term storage. This option is often useful in that it can be incorporated into a disaster recovery procedure or policy by moving the data off-site.

Whichever system is used, the basic concept is centralizing logs from various systems into a single storage system. One advantage of this is that it moves responsibility for the logs, and the data they contain, from individual system owners onto a centralized system. This is much easier to manage and control: all the files fall under a central policy rather than individual application requirements, which often differ.
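
As a small sketch of this centralization idea using Python's standard logging library (the server hostname, port and application names here are placeholders), an application can forward its events to a central Syslog server rather than writing only to local files:

    import logging
    import logging.handlers

    # Placeholder address for the central Syslog server; adjust host/port as needed.
    handler = logging.handlers.SysLogHandler(address=("syslog.example.internal", 514))
    handler.setFormatter(logging.Formatter("%(asctime)s %(name)s %(levelname)s %(message)s"))

    logger = logging.getLogger("payroll-app")
    logger.setLevel(logging.INFO)
    logger.addHandler(handler)

    # Events now land on the central server, where one retention policy applies.
    logger.info("Nightly batch job started")
    logger.warning("Disk usage on /data above 90 percent")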

There are other benefits to the centralized storage model besides simpler policies. A practical advantage is that you have a single point to analyse for information from all of a company's systems, and you can use analytical tools to parse and filter information from all the logs at once.
For example, using Microsoft's free tool Log Parser, you could gather all the system start-up events from every system in the environment.
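
A hedged sketch of how such a query might be driven from Python is shown below; the event ID (6005, logged when the Event Log service starts) is a common stand-in for system start-up, and the archive path is a placeholder.

    import subprocess

    # Query exported Windows event logs with Microsoft Log Parser (LogParser.exe
    # must be installed and on the PATH). Path and field choices are illustrative.
    query = (
        "SELECT TimeGenerated, ComputerName "
        "FROM \\\\logserver\\archive\\*.evt "
        "WHERE EventID = 6005"          # 6005: 'The Event log service was started'
    )
    subprocess.run(["LogParser.exe", query, "-i:EVT", "-o:CSV"], check=True)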

There is another important reason that audit standards enforce the retention of system logs, and that is non-repudiation. This means the logs can be used as proof that a transaction or process happened and cannot be denied later. A simple example is the signing and transmission of a digital message: if the message is signed, the sender cannot later deny having sent it, and logs can be used to demonstrate this too.
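
For illustration, the sketch below uses the third-party cryptography package to sign a message and verify the signature; the message text and key size are arbitrary examples, and in practice it is the signature together with a timestamped log entry that gives you evidence later.

    # Requires the third-party 'cryptography' package (pip install cryptography).
    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.asymmetric import rsa, padding

    private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
    message = b"Transfer approved: invoice 2017-042"   # example message only

    # The holder of the private key produces the signature...
    signature = private_key.sign(
        message,
        padding.PSS(mgf=padding.MGF1(hashes.SHA256()), salt_length=padding.PSS.MAX_LENGTH),
        hashes.SHA256(),
    )

    # ...and anyone with the public key can verify it; verify() raises an
    # exception if the message or signature has been altered.
    private_key.public_key().verify(
        signature,
        message,
        padding.PSS(mgf=padding.MGF1(hashes.SHA256()), salt_length=padding.PSS.MAX_LENGTH),
        hashes.SHA256(),
    )
    print("signature verified")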

Source: http://iplayerusa.org/

Introduction to Object Technologies

Object-orientated technology has taken software development past procedural programming and into a world where software components can be readily reused. This vastly simplifies the development of applications, because any programmer can leverage existing modules quickly and efficiently. Operating systems and applications are created as multiple modules that are slotted together to create a functional working program. This has many advantages, but one of the major ones is that any module can be replaced or updated at any time without having to update the entire operating system.

It can be difficult to visualise these concepts, but just imagine your web browser as a container into which users can add objects that provide extra functionality. These users don't need to be programmers either; anybody can download an object from a web server into the container. It could be something like an applet or an ActiveX component which improves or adds functionality. It could even be something that adds an extra utility to the browser, perhaps an app that performs currency conversion or looks up a website's address or page rank.

This is not new technology, but it is slowly changing the world. It might be surprising to hear that Windows NT was actually built on this object-orientated approach: within that system, printers, computers and other devices are viewed as objects. It is even clearer in later versions that use Active Directory, where users and groups too are classed as individual objects.

The definition of an object is really the crucial point in this development. An object can be virtually anything, whether a parcel of data or a piece of code, each with an external interface which the user can utilise to perform its function. The crucial point is that any or all of these objects can be readily combined to produce something of value to the user. Objects interact with each other by exchanging data or messages, so the client-server model which has served the technology space for so long becomes rather outdated, up to a point: simply stated, any object can act as either a client or a server (or even both).
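
As a loose sketch of that idea (the class names and messages here are invented for illustration, not taken from any particular framework), two objects can exchange messages, with each able to act as both client and server:

    # Illustrative sketch: two objects exchanging messages, each able to act as
    # both a "client" (sending requests) and a "server" (answering them).
    class CurrencyConverter:
        def __init__(self, rates):
            self.rates = rates                      # e.g. {"GBP/USD": 1.30}

        def handle(self, request):
            pair, amount = request["pair"], request["amount"]
            return {"result": amount * self.rates[pair]}

    class BrowserWidget:
        def __init__(self, converter):
            self.converter = converter

        def handle(self, request):
            # Acts as a server for the user interface...
            reply = self.converter.handle(request)  # ...and as a client of the converter
            return f"{request['amount']} {request['pair'].split('/')[0]} = {reply['result']:.2f}"

    widget = BrowserWidget(CurrencyConverter({"GBP/USD": 1.30}))
    print(widget.handle({"pair": "GBP/USD", "amount": 100}))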

Harvey Blount
@Good Proxy Sites

Internet VPNs

An internet VPN can provide a secure way to move data packets across the web if you have the right equipment. There are two basic methods for doing this.

Transport Mode – the technique of encrypting only the payload section of the IP packet for transport across the internet. Using this method the header information is left entirely intact and readable, which means that routers can forward the data as it traverses the internet.

Tunnel Mode – using this method, IP, SNA, IPX and other packets can be encrypted and then encapsulated into new IP packets for transport across the internet. The main security advantage of this method is that both the source and destination addresses of the original packet are hidden from view.
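
To make the difference concrete, here is a deliberately simplified sketch (not a real IPsec implementation; the Packet type and encrypt() function are stand-ins) showing which parts of a packet each mode protects:

    # Illustrative sketch only: shows which parts of a packet are protected in
    # each mode. Packet and encrypt() are simplified stand-ins, not a VPN stack.
    from dataclasses import dataclass

    @dataclass
    class Packet:
        src: str        # source IP address
        dst: str        # destination IP address
        payload: bytes  # transport-layer data

    def encrypt(data: bytes) -> bytes:
        return data[::-1]  # placeholder for a real cipher

    def transport_mode(pkt: Packet) -> Packet:
        # Only the payload is encrypted; the original header stays readable,
        # so intermediate routers can still forward on src/dst.
        return Packet(pkt.src, pkt.dst, encrypt(pkt.payload))

    def tunnel_mode(pkt: Packet, gw_src: str, gw_dst: str) -> Packet:
        # The entire original packet (header included) is encrypted and wrapped
        # in a new outer packet between the two VPN gateways, hiding the real
        # source and destination addresses.
        inner = f"{pkt.src}->{pkt.dst}|".encode() + pkt.payload
        return Packet(gw_src, gw_dst, encrypt(inner))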

In either case, internet VPNs trade off the reliability and guaranteed capacity available with frame relay or ATM virtual circuits. However, the comparatively low cost of these internet VPN products makes them really popular; you can get a UK-based VPN, for example, at a very low cost. Most VPN service providers have recognised the concerns around security and have made security and privacy a priority in product development.

Encryption obviously provides some level of security; however, the other layer needed is secure authentication. It is essential that secure authentication protocols are used to ensure that the people or devices at each end of the link are authorised to use the connection.

There are numerous scenarios for these internet VPNs, but most commercial offerings fall into two distinct groups. The first is the site-to-site connection, designed to tunnel large amounts of data between two specific sites. The second relates to virtual services, usually dial-up style connections from individual users into a corporate site.

Both of these methods will normally use a local connection into an ISP, with the wide-area sections carried over the internet. They are distinct options and are used in very different circumstances. The 'personal VPN' market is growing every year, particularly due to the increasing filtering and censorship which is becoming standard on the internet. Using a VPN allows you both to protect your privacy and to avoid the filters and blocks. In China a huge number of people use these services because of the increasing number of blocks employed by the Chinese state.

Harry Hawkins
Website here

Different Types of XML Parser

An XML parser is a software module that reads XML documents and provides access to their content. A parser may generate a structured tree to return the results to the browser, and it resembles a processor in that it determines the structure and properties of the data. An XML document can be read by an XML parser to produce output used, for example, to build a screen display. There are numerous parsers available and some of these are listed below:

The Xerces Java Parser
The primary uses of the Xerces Java Parser range from the building of XML-aware web servers to ensuring the integrity of e-business data expressed in XML.

XP and XT

XP is an XML parser and XT is an XSL processor; both are written in Java and were contributed to the community by James Clark. XP detects all non-well-formed documents and aims to be the fastest conformant XML parser in Java, giving high performance. XT, on the other hand, is a set of tools for building transformation systems; these tools include pretty printing and bundling.

SAX

Simple API for XML (SAX) was originated by the members of a public mailing list (XML-DEV). It gives an event-based approach to XML parsing, which means that instead of moving from node to node, the application moves from event to event. Events include the start and end of XML tags, character data and parsing errors.

It is optimal for small XML parsers and applications that need speed, and it should be used when processing of the input elements must be performed quickly and economically.
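
As a brief sketch of this event-driven style (using Python's built-in xml.sax module rather than a Java SAX parser; the XML snippet is invented for the example), a handler simply reacts to start-element, character and end-element events:

    # Minimal sketch of event-based parsing with Python's built-in xml.sax module.
    import xml.sax

    class TitleHandler(xml.sax.ContentHandler):
        def __init__(self):
            super().__init__()
            self.in_title = False

        def startElement(self, name, attrs):
            # Called once per opening tag: an "event", not a tree node.
            self.in_title = (name == "title")

        def characters(self, content):
            if self.in_title and content.strip():
                print("Title:", content.strip())

        def endElement(self, name):
            if name == "title":
                self.in_title = False

    xml.sax.parseString(b"<catalog><title>First</title><title>Second</title></catalog>",
                        TitleHandler())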

XML parser

It runs on any platform with a Java virtual machine and is sometimes called XML4J. It has an interface which allows you to take a string of XML-formatted text, identify the XML tags and use them to extract the tagged information.

Harold Evensen
http://www.uktv-online.com/

Important Authentication Solutions – Kerberos

Kerberos is one of the most important authentication systems available to developers and network architects.  Its aim is simple: to provide a single sign-on to an environment comprising multiple systems and protocols.  Kerberos therefore allows mutual authentication and, importantly, secure encrypted communication between users and systems.   It differs from many authentication systems in that it does not rely on security tokens but instead requires each user or system to maintain and remember a unique password.

When a user authenticates against the local operating system, there is normally an agent running which is responsible for sending an authentication request to a central Kerberos server.  The authentication server responds by sending credentials in encrypted form back to the agent.   The local agent then attempts to decrypt the credentials using the password supplied by the user or local application.   If the password is correct, the credentials can be decrypted and the user validated.

After successful validation the user is also given authentication tickets which allow them to access other Kerberos-authenticated services.   In addition, a set of cipher keys is supplied which can be used to encrypt all subsequent data sessions.  This is important for security, especially when a single authentication system fronts a wide range of different applications and systems.

Once validation is complete, no further authentication is necessary: the ticket allows access until it expires.   So although the user does need to remember a password to authenticate, only one is required to access any number of systems and shares on the network.  There are many configuration options to fine-tune Kerberos, particularly in a Windows environment where Kerberos is used primarily to access Active Directory resources.  You can restrict access based on a whole host of factors in addition to the primary authentication.  It is effective in a fluid environment where users may log on to many different systems and applications, even when those systems keep changing their IP address (note: http://www.changeipaddress.net/ ).

There is one main reason that Kerberos has become so successful: it is freely available.  Anyone can download and use the code free of charge, which means it is widely utilised and constantly developed and improved.  There are many commercial implementations of Kerberos, such as those from Microsoft and IBM (Global Sign-On), which normally add extra features and a management system.  There have been concerns over various security flaws in Kerberos, but because it is open source these have been addressed in the latest implementation, Kerberos V.

George Hempseed

Author: BBC iPlayer in Ireland

Command Line Utilities for Troubleshooting DNS

There are, of course, many tools for configuring, installing and troubleshooting DNS, and many can make life an awful lot easier.   Here are some of the most popular ones, which exist on various platforms.

nslookup

This utility is probably the oldest and most widely used DNS tool available.  Its primary function is to run individual, specific queries against all manner of resource records.  It is even possible to perform zone transfers using this tool, which is one reason it is so important.
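
As a quick illustration (the domain and record types are just examples), nslookup can be driven from a script to query several record types in turn:

    import subprocess

    # Query a few common resource record types for an example domain.
    for rtype in ["A", "MX", "NS", "TXT"]:
        print(f"--- {rtype} records ---")
        subprocess.run(["nslookup", f"-type={rtype}", "example.com"], check=False)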

ipconfig

This tool is often used daily to release and renew DHCP addresses.  However, it can also perform some DNS functions, and it is certainly a useful client tool to get to grips with.  There are a couple of very useful switches which supply DNS-related functionality.  The /displaydns switch returns the contents of the client resolver cache, showing the Record Name, Type, TTL, Data Length and RR Data.  The client will use this cached data to answer queries at least until the TTL expires, when it will query a name server again.  The /flushdns switch erases the contents of the resolver cache; in troubleshooting terms this means that cached data will not be used and a fresh request will be sent to a name server.   Finally, /registerdns refreshes the client's DHCP lease and re-registers its DNS records.
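
A small sketch of using these switches from a script on a Windows client (the lookup target is just an example, and output parsing is kept deliberately simple) might look like this:

    import subprocess

    # Windows-only sketch: flush the resolver cache, then dump it again so that
    # only freshly resolved entries appear.
    subprocess.run(["ipconfig", "/flushdns"], check=True)

    # Trigger a lookup so the cache has something in it (domain is an example).
    subprocess.run(["nslookup", "example.com"], check=False)

    result = subprocess.run(["ipconfig", "/displaydns"], capture_output=True, text=True)
    print(result.stdout)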

netdiag

One of the most useful general diagnostic tools you will find in a Windows environment.  It performs a long list of network connectivity tests, including a specific DNS test.   Using the switch /test:DNS, the program checks each active network card to see whether it has an A record registered in the domain.  The additional switch /debug can be used in conjunction with this to produce verbose output, which is extremely helpful in troubleshooting DNS issues.  The tool can be found in the Windows Support Tools directory on the installation disks and shares, and it is surprisingly useful when checking a DNS service or related programs.

dnsdiag

This utility is especially useful for working through email issues that are DNS-related; a DNS misconfiguration can cause all sorts of email problems, as many have experienced.   It works by simulating all the DNS activity that an SMTP agent would perform when delivering email.  There is one caveat to using it for this sort of diagnostic work: you need to run it on a computer which has either an Exchange or an SMTP service installed locally.

Most of these tools can be used to solve a huge range of DNS-related issues, so they are worth getting to grips with.  A great test is to use them against a new installation or DNS design, running through each tool to check that DNS is working properly.

Additional DNS Resource

DNS Messages

If you want to write programs that can utilise DNS messages then you must understand the format.  So where will you find all the queries and responses that DNS uses to resolve addresses?  The majority are carried within UDP, with each message fully contained within a single UDP datagram.  They can also be relayed over TCP/IP, but in this case they are prefixed with a 2-byte value which indicates the length of the query or response.  The extra 2 bytes are not included in that length – an important point!

All DNS communication uses a format simply called a message.  Every function in DNS, from simple queries to Smart DNS services, uses this same format.  The format of the message follows this basic template –

  • Header
  • Question – For the Name Server
  • Answer – Answering the Question
  • Authority – Points Towards Authority
  • Additional – Additional Information

Some sections will be missing depending on the query, but the header will always be present.  This is because the header contains fields which specify which of the remaining sections are present, whether the message is a query or a response, and whether any specific response codes are set.
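
As a small sketch of reading those header fields (the sample bytes below are made up for illustration), the fixed 12-byte header can be unpacked as six 16-bit big-endian values:

    # Minimal sketch: unpack the fixed 12-byte DNS header from a raw message.
    import struct

    def parse_dns_header(message: bytes) -> dict:
        # The header is six 16-bit big-endian fields.
        ident, flags, qdcount, ancount, nscount, arcount = struct.unpack("!6H", message[:12])
        return {
            "id": ident,
            "is_response": bool(flags & 0x8000),   # QR bit: query (0) or response (1)
            "rcode": flags & 0x000F,               # response code, e.g. 3 = NXDOMAIN
            "questions": qdcount,
            "answers": ancount,
            "authority": nscount,
            "additional": arcount,
        }

    # Invented sample: a response header with one question and two answers.
    sample = struct.pack("!6H", 0x1234, 0x8180, 1, 2, 0, 0)
    print(parse_dns_header(sample))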

The name of each section following the header is derived from its actual use; it is all pretty common-sense stuff.  The Question section is indeed a question directed at a name server, and within this section are fields which define the question.

  • QTYPE – Query Type
  • QCLASS – Query Class
  • QNAME – Query Domain Name

If you are programming or developing any application which relies on this functionality, like the best Smart DNS service for example, it is important to understand these fields properly, and you will need to understand their specific format.  The QNAME represents the domain name being queried as a sequence of labels: each label consists of a length octet followed by that number of octets, and the name is terminated by a zero-length octet.
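
A tiny sketch of that label encoding (the domain name is just an example):

    # Sketch of QNAME encoding: each label is a length octet followed by that many
    # bytes, with a zero octet terminating the name.
    def encode_qname(domain: str) -> bytes:
        out = b""
        for label in domain.split("."):
            out += bytes([len(label)]) + label.encode("ascii")
        return out + b"\x00"

    print(encode_qname("www.example.com").hex(" "))
    # 03 77 77 77 07 65 78 61 6d 70 6c 65 03 63 6f 6d 00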