Introduction to Object Technologies

Object-oriented technology has moved software development past procedural programming and into a world where program modules can be readily reused. This vastly simplifies application development, because any programmer can leverage existing modules quickly and efficiently. Operating systems and applications are created as multiple modules that are slotted together to form a working program. Among the many advantages of this approach, one of the biggest is that any module can be replaced or updated at any time without having to update the entire operating system.

It can be difficult to visualise these concepts, but just imagine your web browser as a container into which users can add objects that provide extra functionality. These users don’t need to be programmers, either; anybody can download an object from a web server into the container. It could be an applet or an ActiveX component that improves or adds functionality, or something that adds an extra utility to the browser, perhaps an app that performs currency conversion or looks up a website’s address or page rank.

This is not new technology, but it is slowly changing the world. It might be surprising to hear that Windows NT was actually built on this object-oriented technology: within that system, printers, computers and other devices were all viewed as objects. It is even easier to see in later versions that use Active Directory, where users and groups are also classed as individual objects.

The definition of an object really is the crucial point in this development. An object can be virtually anything: a parcel of data, a piece of code, or both, wrapped in an external interface which the user can call to perform its function. The crucial point is that any number of these objects can be readily combined to produce something of value to the user, and all of them can interact with each other by exchanging data or messages. The client-server model which has served the technology space for so long becomes, to a point, rather outdated: simply stated, any object can act as a client, a server, or both.
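As a rough illustration of these ideas, here is a minimal sketch in Python. The `Browser` container and `CurrencyConverter` plug-in are invented names for this example; the point is only that objects bundle data and code behind an interface, and can be slotted into a container at runtime.

```python
class CurrencyConverter:
    """A self-contained object: data (the rates) plus code,
    behind an external interface."""

    def __init__(self, rates):
        self._rates = rates          # internal data, hidden from callers

    def convert(self, amount, frm, to):
        """The external interface other objects call."""
        return amount / self._rates[frm] * self._rates[to]


class Browser:
    """A container object: plug-in objects can be added at runtime,
    and replaced without rebuilding the container."""

    def __init__(self):
        self._plugins = {}

    def add_plugin(self, name, obj):
        self._plugins[name] = obj    # any object can be slotted in

    def plugin(self, name):
        return self._plugins[name]


browser = Browser()
browser.add_plugin("fx", CurrencyConverter({"USD": 1.0, "GBP": 0.8}))
print(browser.plugin("fx").convert(10, "USD", "GBP"))  # 8.0
```

Notice that the browser acts as a client of the converter here, but nothing stops the converter calling back into the browser: any object can play either role.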

Harvey Blount
@Good Proxy Sites

Internet VPNs

An internet VPN can provide a secure way to move data packets across the web if you have the right equipment. There are two basic methods for doing this.

Transport Mode - the technique of encrypting only the payload section of the IP packet for transport across the internet. With this method the header information is left entirely intact and readable by hardware, which means that routers can forward the data as it traverses the internet.

Tunnel Mode - using this method, IP, SNA, IPX and other packets can be encrypted and then encapsulated into new IP packets for transport across the internet. The main security advantage of this method is that both the source and destination addresses of the original packet are hidden from view.
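The difference between the two modes can be sketched in a few lines of Python. This is purely illustrative: a simple XOR function stands in for a real cipher (IPsec would use something like AES), and the "headers" are plain byte strings rather than real IP headers.

```python
def xor_cipher(data, key):
    """Toy stand-in for a real cipher - do not use for actual
    security. XOR is its own inverse, so the same call both
    encrypts and decrypts."""
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))


def transport_mode(ip_header, payload, key):
    # Transport mode: only the payload is encrypted; the header
    # stays readable so ordinary routers can forward the packet.
    return ip_header + xor_cipher(payload, key)


def tunnel_mode(ip_header, payload, key, gateway_header):
    # Tunnel mode: the WHOLE original packet (header included) is
    # encrypted, then wrapped in a new header naming only the
    # gateways, hiding the original source and destination.
    return gateway_header + xor_cipher(ip_header + payload, key)
```

In transport mode the original header survives in the clear; in tunnel mode an eavesdropper sees only the gateway-to-gateway header.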

In either case, internet VPNs trade away the reliability and guaranteed capacity available with frame relay or ATM virtual circuits. However, the comparatively low cost of internet VPN products makes them very popular; a UK-based VPN, for example, can be had very cheaply. Most VPN service providers have recognised the concerns around security and have made security and privacy a priority in product development.

Encryption obviously provides some level of security, however the other layer needed is secure authentication. It is essential that secure authentication protocols are used to ensure that the people or devices at each side of the link are authorised to use the connection.

There are numerous scenarios for these internet VPNs, but most commercial ones fall into two distinct groups. The first is the site-to-site connection, which is designed to tunnel large amounts of data between two specific sites. The second covers virtual services, usually dial-up type connections from individual users into a corporate site.

Both of these methods will normally use a local connection into an ISP, with the wide-area sections carried over the internet. They are distinct options and will be used in very different circumstances. The ‘personal VPN’ market is growing every year, particularly due to the increasing filtering and censorship becoming standard on the internet. Using a VPN allows you both to protect your privacy and to avoid the filters and blocks. In China a huge number of people use these services because of the growing number of blocks employed by the Chinese state.

Harry Hawkins

Different Types of XML Parser

An XML parser is a software module that reads XML documents and provides access to their content. The parser builds a structured tree and returns the results to the calling application, such as a browser. In effect, an XML parser is a processor that determines the structure and properties of the data: it can read an XML document and produce the output used to build a screen display. There are numerous parsers available and some of these are listed below:

The Xerces Java Parser
The primary functions of the Xerces Java Parser are the building of XML-aware web servers and ensuring the integrity of e-business data expressed in XML.

XP and XT

XP is an XML parser and XT is an XSLT processor; both were written in Java by James Clark. XP detects all non-well-formed documents and aims to be the fastest conformant XML parser written in Java. XT, for its part, implements XSL transformations, applying a stylesheet to turn one XML document into another.

SAX

Simple API for XML (SAX) originated with the members of a public mailing list (XML-DEV). It provides an event-based approach to XML parsing: instead of walking from node to node, the parser moves from event to event. Events include the opening and closing of XML tags, runs of character data, and error notifications.

It is optimal for small XML parsers and for applications that need speed, and should be used when the processing of input must be performed quickly and economically.
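As a concrete illustration, Python's standard `xml.sax` module follows exactly this model: you register a handler and the parser calls it from event to event as tags and text are encountered, never building a full tree.

```python
import xml.sax


class TagCollector(xml.sax.ContentHandler):
    """SAX calls methods like these as each event occurs, rather
    than handing back a finished node tree."""

    def __init__(self):
        super().__init__()
        self.tags = []

    def startElement(self, name, attrs):
        self.tags.append(name)      # one event per opening tag

    def characters(self, content):
        pass                        # one event per run of text


handler = TagCollector()
xml.sax.parseString(b"<order><item>book</item></order>", handler)
print(handler.tags)                 # ['order', 'item']
```

Because nothing is held in memory beyond the current event, this style stays fast and small even on very large documents.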

XML for Java

It runs on any platform with a Java virtual machine and is sometimes called XML4J. It has an interface which allows you to take a string of XML-formatted text, identify the XML tags, and use them to extract the tagged information.

Harold Evensen
http://www.uktv-online.com/

Proxy Protocol Verification

Circuit-level tunneling through a proxy server, such as SOCKS or SSL, will allow most protocols to be passed through a standard proxy gateway. Whenever you see a statement like this, remember what it implies: the protocol is not actually understood, merely transmitted transparently. For instance, the popular tunneling protocol SSL can tunnel virtually any TCP-based protocol without a problem, and it is often used to add some protection to weak protocols like FTP and Telnet.

But this can create a headache for a proxy administrator. Not only can all sorts of protocols be allowed access to a network, but often the administrator has no knowledge of their contents due to encryption. There are some short-term solutions which provide a limited amount of protection - for example, blocking access based on port numbers: that is, only allowing specific ports to be tunneled, such as 443 for HTTPS or 636 for secure LDAP. This can work well, but remember that some security programs, such as Identity Cloaker, allow the port to be configured, letting protocols and applications be tunneled on non-standard ports. It is therefore not an ideal solution and cannot be relied upon in the longer term to keep a network and proxy secure.
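A minimal sketch of this port-number defence, assuming the proxy sees HTTP CONNECT request lines; the allow-list is just the two ports mentioned above, and the check obviously cannot see inside the encrypted stream.

```python
# Ports the proxy will agree to tunnel: 443 (HTTPS), 636 (secure LDAP).
ALLOWED_TUNNEL_PORTS = {443, 636}


def may_tunnel(request_line):
    """Return True if a CONNECT line such as
    'CONNECT example.com:443 HTTP/1.1' targets an allowed port."""
    try:
        method, target, _version = request_line.split()
        host, _, port = target.rpartition(":")
        return (method == "CONNECT"
                and host != ""
                and int(port) in ALLOWED_TUNNEL_PORTS)
    except ValueError:
        return False        # malformed request line - refuse it
```

This refuses tunnels to unexpected ports, but a client tunneling a forbidden protocol *over* port 443 sails straight through, which is exactly the weakness the text goes on to address.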

The obvious solution is to use a proxy server that can verify the protocol being transmitted. This requires considerably more intelligence built into the proxy, but it is possible. It carries a bigger overhead, and it makes the proxy server more expensive, more complicated, and trickier to manage. Without this sort of intelligence, however, you leave open the possibility of, for example, an FTP session being set up through an SSL tunnel.

In some ways proxies already do some of this, and protocols that are proxied at the application level rather than tunneled cannot be exploited like this. HTTP, FTP and even Gopher cannot be used to trick entry, simply because there is no ‘dumb’, direct tunnel: the proxy understands the protocol and will only relay legitimate responses.

Identity Systems - Distributing Authentication and Authorization

Any automated identity system needs some way of both creating and distributing authorization and authentication assertions. One of the most famous is of course Kerberos, which has its own methods for dealing with this requirement. However, many digital systems are now starting to use SAML - the Security Assertion Markup Language - which is becoming the de facto security credential standard.

SAML uses XML as the standard for representing security credentials, and it also defines a protocol for requesting and receiving credential data from a SAML-based authority service. One of SAML's key benefits is that it is straightforward to use, a fact which alone has increased its usage considerably. A client makes a request about a subject to the SAML authority, and the authority in turn makes assertions about the identity of that subject within a particular security domain. As one simple example, the subject could be identified by an email address linked to its originating DNS domain.

So what exactly is a SAML authority? It is quite simply a service (usually online) that responds to SAML requests; the responses it returns are called assertions. There are three different types of SAML authority which can be queried - authentication authorities, attribute authorities and policy decision points (PDPs) - and each returns a distinct type of assertion:

  • SAML authentication assertions
  • SAML attribute assertions
  • SAML authorization assertions

Although three different types are defined here, in practice most authorities are set up to produce all three kinds of assertion. Occasionally, in very specific applications, you will find an authority designed to produce only a particular subset, but this is quite rare, especially in online applications. All assertions contain certain common elements, however, such as the issuer ID, a time stamp, an assertion ID, and the subject (including its name and security domain).

Each SAML attribute request begins with the standard syntax <samlp:Request …>; the content that follows refers to the specific parts of the request. This could be virtually anything, but in practice it is often something straightforward, such as asking which department or domain an email address is associated with.
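A rough sketch of building such a request in Python follows. The element and attribute names are taken from the SAML 1.x schema; the `RequestID` value is a placeholder, and a real client would also sign the request and send it over a binding such as SOAP.

```python
import xml.etree.ElementTree as ET

SAMLP = "urn:oasis:names:tc:SAML:1.0:protocol"
SAML = "urn:oasis:names:tc:SAML:1.0:assertion"
EMAIL_FMT = "urn:oasis:names:tc:SAML:1.0:nameid-format:emailAddress"


def attribute_request(email):
    """Build a SAML 1.x attribute query asking an attribute
    authority what it knows about a subject named by email."""
    req = ET.Element(f"{{{SAMLP}}}Request",
                     {"RequestID": "placeholder-id",
                      "MajorVersion": "1", "MinorVersion": "0"})
    query = ET.SubElement(req, f"{{{SAMLP}}}AttributeQuery")
    subject = ET.SubElement(query, f"{{{SAML}}}Subject")
    name = ET.SubElement(subject, f"{{{SAML}}}NameIdentifier",
                         {"Format": EMAIL_FMT})
    name.text = email               # the subject being asked about
    return ET.tostring(req, encoding="unicode")
```

The authority's reply would carry one or more attribute assertions about that subject, signed so the client can trust them.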

Source : Sam Wilkin - IT Consultant

 

Confidentiality Using XML Encryption

Just like every other type of communication that exists online, XML documents can be secured with encryption. In fact, it is recommended that where possible all important XML documents be encrypted completely before being transmitted across the wire. The document is then decrypted with the appropriate key when it reaches its destination.

There is a problem with this, however: when you encrypt a whole message you also obfuscate all of it, and unfortunately some parts of an XML message need to be sent in clear text. Take SOAP messages, the format computers use to exchange RPCs (remote procedure calls) over the internet. Although you can encrypt certain parts of a SOAP message, at a minimum the headers must be in clear text, otherwise intermediary devices would not be able to see routing and other important information.

The other alternative is to encrypt the channel itself, typically using something like SSL or SSH. This protects the message in transit by encrypting the entire channel. However, there is another issue here: channel encryption only protects the link between the two endpoints, and everywhere else the message is exposed in clear text. These problems were real issues for XML developers, and to combat them the XML Encryption standard was developed.

The primary goal of this standard is to allow the partial and secure encryption of any XML document. Very much like other XML standards, such as the signature specification, the encryption standard has quite a lot of different parts, enabling it to deal with all sorts of contingencies; the core functions, however, are quite simple and easy to follow.

Any encrypted element in an XML document is identified by the <EncryptedData> element, which consists of two distinct parts -

  • An optional <KeyInfo> element that gives information about the key. This element is actually the same one that is defined in the XML Signature specification.
  • A <CipherData> element that can either include the actual encrypted data inside a <CipherValue> element, or contain a reference to the encrypted data enclosed in a <CipherReference> element.

For instance, XML encryption may be used in an online payment system which sends orders as XML documents. The order document may contain all the information about the order, including sensitive payment details such as credit card numbers, held in a dedicated element. In this example most of the order should be left in clear text so that it can be processed quickly, but the payment information should be encrypted, and decrypted only when the payment is actually being processed. XML encryption makes this possible by allowing specific parts of the document - i.e. the payment information - to be encrypted.
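A sketch of what that selective encryption might look like, using Python's ElementTree. The `<order>`/`<payment>` document is invented for the example, and the ciphertext is assumed to come from a real cipher elsewhere; only the `EncryptedData`/`CipherData`/`CipherValue` structure comes from the XML Encryption standard.

```python
import xml.etree.ElementTree as ET
from base64 import b64encode

XENC = "http://www.w3.org/2001/04/xmlenc#"


def encrypt_element(parent, tag, ciphertext):
    """Swap one child element for an <EncryptedData> element whose
    <CipherData>/<CipherValue> carries the (externally produced)
    ciphertext, leaving the rest of the document in clear text."""
    parent.remove(parent.find(tag))
    enc = ET.SubElement(parent, f"{{{XENC}}}EncryptedData")
    cipher_data = ET.SubElement(enc, f"{{{XENC}}}CipherData")
    value = ET.SubElement(cipher_data, f"{{{XENC}}}CipherValue")
    value.text = b64encode(ciphertext).decode()


order = ET.fromstring(
    "<order><item>book</item><payment>card-details</payment></order>")
encrypt_element(order, "payment", b"bytes-from-a-real-cipher")
```

After the swap, the item details remain readable for order processing while the payment element has been replaced by an opaque `EncryptedData` block.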

James Hassenberg: Technical Blogger.

 

Social Networks to Form Disaster Warning Systems

Japan is planning a brand new disaster warning system this year. It will be based on social networking sites and was drafted in the terrible aftermath of the 2011 disasters. One of the major problems in the aftermath of a disaster is coordination and communication: natural disasters tend to destroy telephone lines and standard emergency lines.

The internet, however, is a lot more resilient than a fixed copper wire connecting a phone system. There are many ways to connect to the internet and it is virtually impossible to completely destroy its infrastructure. We all have different ways to connect, and many of us routinely use social networking sites every day.

The test will focus on the most popular social sites in Japan: Twitter and a local site called Mixi. These sites can be accessed from computers, laptops, mobile phones and a host of other devices. The test is initially intended to help create some ground rules for the communication and to ensure that things like false disaster reports don't create havoc and panic.

Obviously power cuts and broken telecoms infrastructure will still have an impact, but it is hoped that the resilient nature of the internet will overcome some of these difficulties. The scheduled test will simulate a disaster and see how people use their mobiles and other devices to communicate.

There is great hope for this system, and using something like Twitter does seem a sensible option for mass communication. There are issues that may have an impact in many countries, though, not least the increasing number of restrictions placed on internet access.

There are worries that these filters, which force people to use proxies, will end up harming some of the advantages of the internet as a communication medium. This is compounded by the many companies imposing blocks and restrictions on a commercial basis; even publicly funded companies like the BBC block access to their site, so that proxies are needed to watch BBC iPlayer abroad.

Fortunately you don’t need a Japan proxy yet, as Japan does not currently block or censor the internet to any great extent, but many countries do. Whether this will prove to be a real issue or not is hard to say, but it is certainly a concern.

Performance Issues - For Web Servers and Proxies

One of the most important issues affecting your server's performance is how it deals with DNS. The lookups your server has to make will affect the speed of your web server or proxy. In most circumstances a DNS lookup is needed to find the IP address the server should connect to when retrieving a URL. Often this is not an issue - if all the content is stored on the local host, no lookups are required - but web content frequently contains links and images stored on other servers. For a web server this can hurt performance; for a proxy server it can almost bring the server to its knees.

Unfortunately there is no way for a server to avoid performing DNS lookups - it is basically how the internet works. For a proxy server it is even more problematic: an active connection can involve hundreds of requests, both forward and reverse DNS lookups. Multiply this by a few hundred or even a thousand clients and you can imagine the potential impact on your server. Since the lookups are unavoidable, to increase DNS and overall server performance you should look at DNS caching.

You should enable this feature whenever possible: it allows a server to remember a set of resolved IP addresses internally so that repeat requests are answered instantly. It can have a huge effect on performance, because the server avoids many DNS lookups and thus the latency they impose. Remember, though, that a DNS lookup is only obligatory when the requester actually needs to connect to the source; if your pages embed lots of images hosted on other sites, make sure your web server doesn't stall waiting on endless lookups for those other hosts.

For proxy servers this matters even more, since they handle far more requests. Take, for example, an infrastructure of proxy servers designed to encrypt traffic and protect your identity: the security has to be strong, but in reality people will not use services that slow their connections down.

DNS caching can take place anywhere DNS lookups are required. You can also use it to protect a server's resources - for example, by not repeatedly using up server resources looking up bad addresses. Install some sort of negative caching so that repeated DNS requests for bad servers are ignored: if a name cannot be resolved, cache that failure rather than retrying on every request.
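A toy positive-and-negative DNS cache along these lines might look as follows. The TTL values are arbitrary defaults chosen for the example, not values read from the DNS records themselves, and the example entries simulate earlier lookups so no network access is needed.

```python
import socket
import time


class DNSCache:
    """Tiny positive/negative DNS cache (illustrative TTL defaults)."""

    def __init__(self, ttl=300, negative_ttl=60):
        self.ttl = ttl
        self.negative_ttl = negative_ttl
        self._cache = {}        # name -> (expiry time, IP or None)

    def resolve(self, name):
        now = time.time()
        hit = self._cache.get(name)
        if hit and hit[0] > now:
            return hit[1]       # cached answer - positive or negative
        try:
            ip = socket.gethostbyname(name)
            self._cache[name] = (now + self.ttl, ip)
        except socket.gaierror:
            # Negative caching: remember the failure so repeated
            # requests for a bad name don't burn server resources.
            self._cache[name] = (now + self.negative_ttl, None)
            ip = None
        return ip


cache = DNSCache()
# Simulate earlier lookups so the example needs no network access:
cache._cache["good.example"] = (time.time() + 60, "192.0.2.10")
cache._cache["bad.example"] = (time.time() + 60, None)
```

Storing `None` for failures is the key design choice: the cache answers "don't bother" instantly instead of re-resolving a dead name on every request.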

Proxy and Web Authentication Methods - Cookies

When HTTP authentication is required by a web server, that authentication takes place for every single request. For each request it receives, the web server must decode the message, find the username and password, and then verify them against its user database (if that is the method being used). Naturally this takes a lot of effort, and the most obvious result is reduced speed: the connection slows down to allow all this processing.

There are other methods to circumvent this difficulty with HTTP, and the most popular is probably the ‘cookie’. Typically, if a request is received with no authentication credentials and no cookie, the user receives a 401 response (401 means authentication is required). A client browser operating in normal mode - not in a privacy-enabled session like Chrome's incognito - will remember which servers require authentication, enabling it to send the credentials automatically and so avoid the inconvenience of another 401 response.

Of course there are other authentication methods. SecurID cards, for example, have passwords that change each time, in which case there is no alternative but for the user to enter the password on each request. One of the most common ways of avoiding this burden is to pass a cookie after a successful authentication; on subsequent requests the cookie can be forwarded, and most servers will accept it as a valid authentication credential.

The information in the cookie must be secured, typically encoded and then verified with an MD5 signature. This stops the cookie being altered or modified in transit. The other information normally included in the cookie would be:

  • User ID
  • IP Address of Origin
  • Cookie Expiration Time
  • Cookie Signature/Fingerprint

Part of this data will be encrypted, while other parts, like the expiration time and IP address, will usually be in clear text. The clear-text data and the MD5 portion can be used to verify the cookie's validity, along with a random string generated and passed when the cookie is originally created.
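A hedged sketch of issuing and verifying such a cookie. The field layout and the MD5-plus-secret fingerprint follow the description above; `SECRET` stands in for the random server-side string, and a modern design would prefer HMAC-SHA256 over raw MD5.

```python
import hashlib
import time

# Stand-in for the random server-side string generated when the
# cookie scheme is set up (an assumption for this sketch).
SECRET = b"random-server-side-string"


def make_cookie(user_id, origin_ip, lifetime=3600):
    """Issue a cookie carrying user ID, origin IP, expiry, and an
    MD5 fingerprint over those fields plus the server secret."""
    expires = int(time.time()) + lifetime
    clear = f"{user_id}|{origin_ip}|{expires}"
    sig = hashlib.md5(clear.encode() + SECRET).hexdigest()
    return f"{clear}|{sig}"


def verify_cookie(cookie, request_ip):
    user_id, origin_ip, expires, sig = cookie.split("|")
    clear = f"{user_id}|{origin_ip}|{expires}"
    if hashlib.md5(clear.encode() + SECRET).hexdigest() != sig:
        return False                    # altered in transit
    if int(expires) < time.time():
        return False                    # expired
    return origin_ip == request_ip      # an IP change invalidates it
```

The final IP check is exactly what a proxy that changes the client's apparent address trips over, as discussed next.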

This transparent pass-through is important in many applications, and a well-configured proxy must be able to handle these requests easily. Unfortunately, normal cookies cause issues when proxies are involved, because they are designed to be exchanged between the client and server endpoints. Take, for instance, using a proxy service to watch UK TV from abroad.

Using such services might mean that your IP address changes during the connection, which will effectively invalidate the cookie: either the session is disconnected or re-authentication must occur. If the proxy handles these connections properly the cookie will remain valid, though it can be quite difficult to configure.

Some IP Routing Stuff

Conceptually, IP routing is pretty straightforward, especially when you look at it from the host's point of view. If the destination is directly connected, such as over a point-to-point link or on the same Ethernet network, the IP datagram is simply forwarded to its destination. If not, the host sends the datagram to its default router and lets that router handle the next stage of delivery. This simple example covers most scenarios.

The basis of IP routing is that it is done hop by hop. The Internet Protocol does not know the complete route to any destination except those directly connected; IP routing relies on sending the datagram to a next-hop router - assumed to be closer to the destination - until it reaches a router that is directly connected to the destination.

IP routing performs the following steps -

I) Search the routing table for an entry matching both the destination's network ID and host ID. If one exists, the packet can be sent on to the destination.

II) Search the routing table for an entry matching the network ID alone. A single entry suffices for an entire network, and the packet is sent to the indicated next hop.

III) If both searches fail, look for the entry marked ‘default’. The packet is then sent to the next-hop router associated with this entry.

If all these searches fail, the datagram is marked undeliverable. In reality most packets fail the first two searches and are forwarded to the default gateway, which could be a router or even a proxy that forwards traffic to the internet.
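The three-step search can be sketched directly in Python with the standard `ipaddress` module. This is a simplified model of the table walk described above, not a real router's longest-prefix-match implementation; the example table and next-hop names are made up.

```python
import ipaddress


def route(dest, table, default=None):
    """Walk a routing table using the three searches above. `table`
    maps CIDR prefixes (host routes are /32 entries) to next hops."""
    dest = ipaddress.ip_address(dest)
    # I) matching network AND host ID: an exact host route
    for prefix, next_hop in table.items():
        net = ipaddress.ip_network(prefix)
        if net.prefixlen == 32 and dest in net:
            return next_hop
    # II) matching network ID: one entry covers the whole network
    for prefix, next_hop in table.items():
        if dest in ipaddress.ip_network(prefix):
            return next_hop
    # III) fall back to the default entry (None = undeliverable)
    return default


table = {"192.0.2.7/32": "direct", "10.0.0.0/8": "10.0.0.1"}
print(route("10.1.2.3", table, default="gw"))   # 10.0.0.1
print(route("8.8.8.8", table, default="gw"))    # gw
```

Note how the `/8` entry stands in for sixteen million possible hosts: that is the compression that keeps real routing tables small.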

If the packet cannot be delivered (usually down to some fault or configuration error), an error message is generated and sent back to the originating host. Two key points to remember: first, a default route can be specified for all packets, even when the destination host and network ID are not known.

Second, the ability to specify routes to whole networks, without having to specify the exact host, is what makes the system scale: routing tables contain a few thousand destinations instead of several million.