In the modern environment where one should be using VMs for everything, we find ourselves wanting to save on disk, memory and CPU usage wherever we can.  To this end, many of us may choose to set up our Windows servers in a headless configuration (Windows Server Core).  According to Microsoft’s own documentation, this can save as much as 4GB of disk space alone.  Outside of forcing yourself to learn PowerShell, this isn’t terribly useful for a single server; however, if you have many Windows Server 2016 servers the savings start to add up rather quickly.  As you gain more servers, both this reduction and the need to use PowerShell to automate tasks become more and more important.

I’m going to start by assuming that you’ve already installed Windows Server 2016, with or without the full GUI.

Note: If you are using Windows Server 2016 Core you will need to open a PowerShell console each time the system boots by typing `powershell` at the console after you log in.  If you have the full GUI you will need to start PowerShell as well; I encourage you to right-click the Start Menu and choose PowerShell.

The first thing you need to do is to set up a static IP address for your DC.  To do this you need to know the name of the interface to your local network.

Get-NetIPAddress

This should list all of your network interfaces.  Find the correct one for your server/network and take note of its `InterfaceAlias`; this is the name of the adapter.  In my case, my servers are in the 10.0.50.0 – 10.0.50.63 range (/26), this new server will be 10.0.50.18, and the adapter the server will use on the internal network is known as “Ethernet0”.

New-NetIPAddress -InterfaceAlias "Ethernet0" -IPAddress "10.0.50.18" -PrefixLength 26 -DefaultGateway "10.0.50.1"

Now set the name of the system; in my case this will be DC02, and its FQDN will be DC02.internal.example.com

Rename-Computer -NewName "DC02"
Restart-Computer

If you have an existing domain, it is easier at this stage to join the new server to the domain and then make it a DC.  The -Server parameter is optional in this command, but if you need to specify which domain controller to use for the join, this is the place to do it.

Add-Computer -DomainName internal.example.com -Server dc01.internal.example.com -Credential (Get-Credential) -Restart

Now you should be able to see this server if you have a look in Active Directory Users and Computers; however, it’s not in the Domain Controllers OU because it isn’t yet a Domain Controller.  Before we can promote it, we need to add the Active Directory Domain Services role to this server.

Add-WindowsFeature -name AD-Domain-Services

Now that we have AD DS installed we can make this server a DC!

Install-ADDSDomainController -DomainName "internal.example.com" -Credential (Get-Credential)

This will prompt for your username and password; the account you use here must be a member of the Domain Admins or Enterprise Admins group (or have the equivalent rights delegated). Over the next few minutes, depending on the size of your domain/forest, the new DC will sync with the existing domain controllers.  Once the sync is done the DC should be up and ready for use.
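Once the promotion completes, a quick sanity check is to confirm that the new DC is listed and replicating.  This is just a sketch; the cmdlets below come from the ActiveDirectory PowerShell module (the RSAT-AD-PowerShell feature), so add that feature if it isn’t already present on the box you run them from.

#List the domain controllers the domain knows about; DC02 should appear
Get-ADDomainController -Filter * | Select-Object Name, Site, IPv4Address

#Check when inbound replication last succeeded for the new DC
Get-ADReplicationPartnerMetadata -Target "DC02" | Select-Object Partner, LastReplicationSuccess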

While working on a Windows XP system today I had to check the system for corrupted system files. Typically we’d just run sfc from the command line; however, this system’s dllcache was corrupted or missing.  On a normal system this wouldn’t be a problem, you’d just insert the Windows XP install media, but this device didn’t have an optical drive, nor would WFP accept the ISO we’d mounted with WinCDEmu.  It turns out there is a pair of registry values that indicate the location of the source files, which in our case were set to the letter of the optical drive that was no longer present; that drive letter was now assigned to a data drive.

HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows\CurrentVersion\Setup\SourcePath

and

HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows\CurrentVersion\Setup\ServicePackSourcePath

Changing these to reflect the location of our mounted ISO and restarting the scan resolved the issue and we were able to find and repair the corrupted files.
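For reference, the change can also be made from the command line with reg.exe.  This is only a sketch: it assumes the ISO ended up mounted as E: (adjust the drive letter for your system), and note that SourcePath should point at the folder that contains the i386 directory.

reg add "HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion\Setup" /v SourcePath /t REG_SZ /d E:\ /f
reg add "HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion\Setup" /v ServicePackSourcePath /t REG_SZ /d E:\ /f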

Yes, we’d like to replace this system altogether; none of us likes having to run an out-of-support, 16-year-old operating system.  However, as it often goes, this system is still critical and we can’t quite get rid of it yet.

This article is quite outdated, and recent changes to the way APT handles caching mean that this no longer works on the most recent versions.

 

Bandwidth is getting cheaper, but if you have even a moderately sized network, whether in a self-managed rack, in a colo or at Amazon’s EC2, there is a good chance that bandwidth makes up a good bit of your monthly bill. Maybe you have a lab in your office or home with several servers and would like to either save on bandwidth costs or just get faster updates. Both of these scenarios can benefit from an APT cache/proxy.

You have a few options. One is to create a local mirror of one or two of the common package repositories; this is best if you have a wide range of systems (desktops, laptops and servers) which often have changing package requirements. This solution requires you to host a large amount of data (several hundred GB last time I checked, and I’m sure it’s much larger now): you download the entire repository and update /etc/apt/sources.list on each machine to use your server as the repository (a quick sketch of that follows below). For server infrastructure this isn’t usually the best route to take. With servers, what is most common is that you have relatively few packages (Apache, PHP, MySQL and SSH on a typical LAMP box), these packages exist on all (or most) of your servers, and they are one of the few things to actually change from time to time. In this case an APT cache/proxy is the best route, and that is what I’m going to demonstrate today. We will be creating a proxy for apt-get which will cache the packages it sees.
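For comparison, if you did go the full-mirror route, the client-side change is just pointing sources.list at your mirror. A rough sketch, with mirror.internal.example standing in for whatever host you put the mirror on:

#/etc/apt/sources.list on each client
deb http://mirror.internal.example/debian stable main contrib non-free
deb-src http://mirror.internal.example/debian stable main contrib non-free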

 

Required Tools:

  • Server running Debian or a Debian based distribution such as Ubuntu, with ~50GB of free space[1].
  • Internet Connection

The first thing we need to do is to install the apt-cacher package

apt-get install apt-cacher

Now the easy work is over ;). Let’s get the config file set up.

Edit /etc/apt-cacher/apt-cacher.conf

 

The config file contains a few important directives. The first is cache_dir; this is the directory that apt-cacher will use to store the cached packages. You can place it anywhere you like; I don’t put it in my /home folder, just to make sure it doesn’t end up in my backups.

cache_dir=/var/apt-cache
#The port apt-cacher will listen on, don't forget to open it in the firewall if need be
daemon_port=3142
admin_email=root@localhost #email address displayed on error pages
group=www-data #group to use for file permissions
user=www-data #user to use for file permissions
#ip masks that are allowed (or denied) to use this cache
#I keep it set to * which allows any host, but I bind my cache to an interface that is only available to my VPN or internal network
#192.168.0.2, 192.168.0.3
#192.168.0/16
allowed_hosts=*
denied_hosts=
#same as above, but with IPv6
allowed_hosts_6=fec0::/16
denied_hosts_6=
#apt-cacher can generate reports about cache usage, size and hits/misses.
#The report is accessible via http://[host]/apt-cacher/report
#set to 0 to disable
generate_reports=1
#apt-cacher will clean the cache every 24 hours, removing packages that no longer exist in the repositories
#set to 0 to disable
clean_cache=1
#Set this to 1 to prohibit apt-cacher from fetching new files (it will only provide files that are cached)
offline_mode=0
#log file
logdir=/var/log/apt-cacher
#The number of hours to keep a file in the cache before it is required to be downloaded again
#0 means that apt-cacher will check to see if the file in the repository is newer than the cached one (based on HTTP headers)
expire_hours=0
#if you need to proxy your outbound connections set use_proxy and use_proxy_auth accordingly

#http_proxy=proxy.example.com:8080
use_proxy=0
#http_proxy_auth=proxyuser:proxypass
use_proxy_auth=0

#use this to bind apt-cacher to a specific interface for the outbound connection (for fetching packages)
interface=

# Use 'k' or 'm' to use kilobytes or megabytes / second: eg, 'limit=25k'.
# Use 0 or a negative value for no rate limiting.
limit=0

#if you want a lot more data in your logs set this to 1
debug=0

That wasn’t actually too hard, was it? Let’s set apt-cacher to start automatically with the server by editing /etc/default/apt-cacher and setting AUTOSTART=1.
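If you’d rather make that edit from the shell, a one-liner along these lines should do it (a sketch, assuming the stock /etc/default/apt-cacher contains an AUTOSTART= line):

sed -i 's/^AUTOSTART=.*/AUTOSTART=1/' /etc/default/apt-cacher

Then restart apt-cacher: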

/etc/init.d/apt-cacher restart

And we’re done with the server. Now we just need to configure the other systems to use the cache/proxy.

So now, on all of the systems that are going to use this cache, edit (or create) the file /etc/apt/apt.conf.d/90-apt-proxy.conf and add just one line.

Acquire::http::proxy "http://[ip or hostname]:3142";

Edit the port if you changed it above (the “daemon_port=” line of the config file), and you should be good to go!
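To confirm the cache is actually being used, run an update on one of the clients and then have a look at the server side; a quick sketch, assuming the log directory from the config above and apt-cacher’s usual access.log file name:

#On a client
apt-get update

#Back on the cache server, the access log should show the client's requests
tail /var/log/apt-cacher/access.log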

[1] The amount of free space you need will depend on how many packages you use.  For my typical test server running a LAMP config, I use about 500MB of disk space for the cache.

Amazon’s EC2 provides a great platform for your applications; it can allow you to scale your application seemingly infinitely. This is great, however there are people like myself who like to have more control over the servers we run. Amazon provides a decent set of pre-built images, mostly ready to use, and there is also a vast selection of community images, but I don’t much like these; call me paranoid, but one just doesn’t know what’s installed. I feel that the best option is to build your own image. The process, although not for a novice, is fairly straightforward. I’m going to attempt to detail what I feel is the best process; it will be based on Debian, and it will require an existing Debian system.

What you’ll need:

  • Working Debian System – You could use any other distro, but you’ll have to do some research to replace some of these commands
  • Amazon AWS Account (Create one free @http://aws.amazon.com/)
  • Amazon Account Number (#1)
  • Access Key ID and Secret Access Key (#2 and 3)
  • X.509 certificate; you’ll need both files, pk-xxxx and cert-xxxx
  • About 2-3 hours depending on your internet connection.

First thing is to install some required packages.

apt-get install debootstrap ruby sun-java6-bin libopenssl-ruby curl unzip

 

Now let’s download and set up the tools needed to bundle and upload the AMI.

mkdir /opt/ami_tools
mkdir /opt/api_tools
mkdir ~/ami_tools
cd ~/ami_tools
wget http://s3.amazonaws.com/ec2-downloads/ec2-ami-tools.zip
unzip -d ./ ec2-ami-tools.zip
cp -R ./ec2-ami-tools-1.4.0.5/* /opt/ami_tools/
#note that your version number may be different than this.
mkdir ~/api_tools
cd ~/api_tools
wget http://s3.amazonaws.com/ec2-downloads/ec2-api-tools.zip
unzip -d ./ ec2-api-tools.zip
cp -R ./ec2-api-tools-1.5.2.4/* /opt/api_tools/
#note that your version number may be different than this.

#add the ami-tools and api-tools to the $PATH var
export PATH=$PATH:/opt/ami_tools/bin:/opt/api_tools/bin

#this bit here is not necessary but I like to clean up
cd ~
rm -Rf ami_tools/
rm -Rf api_tools/

 

Next we need to create a disk image for our AMI. You can use almost any size you want, from ~350MB (roughly the smallest Debian/Apache server I’ve been able to build) up to several hundred GB (I don’t know where the ceiling is). I’m using a VM to build the AMI; it has a second disk attached, mounted at /ami, which is where I store the images. First let’s assign the name and path of our image to variables so that you can copy and paste most of my commands; this also helps reduce typos.

export AMI_Name=debian_lamp
export AMI_Path=/ami/$AMI_Name
#change this to fit your environment.

 

Now let’s create what will become our virtual disk.

dd if=/dev/zero of=$AMI_Path count=10240 bs=1M

 

The output should be something like

10240+0 records in
10240+0 records out
10737418240 bytes (11 GB) copied, 36.9953 s, 290 MB/s

 

This will create a file of all zeros, 10240 blocks of 1MB in size (~10GB); you can adjust this to be however large you’d like. It will take a few minutes, and when it’s done, let’s create a file system on the new file.

mkfs.ext3 -F $AMI_Path

 

The output should be something like

mke2fs 1.41.3 (12-Oct-2008)
Filesystem label=
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
655360 inodes, 2621440 blocks
131072 blocks (5.00%) reserved for the super user
First data block=0
Maximum filesystem blocks=2684354560
80 block groups
32768 blocks per group, 32768 fragments per group
8192 inodes per group
Superblock backups stored on blocks:

 32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632

Writing inode tables: done
Creating journal (32768 blocks): done
Writing superblocks and filesystem accounting information: done

This filesystem will be automatically checked every 31 mounts or
180 days, whichever comes first. Use tune2fs -c or -i to override.

 

If you don’t want to use ext3 you can choose almost any other option. Now that we’ve got a filesystem, it’s time to mount this disk and get the operating system set up.

mkdir /chroot
mount -o loop $AMI_Path /chroot

Verify that it is mounted and that everything looks good:

mount
 #output:
 /ami/debian_lamp on /chroot type ext3 (rw,loop=/dev/loop1)
df -H
 #output:
 /ami/debian_lamp 11G 158M 9.9G 2% /chroot

 

Now we can bootstrap Debian onto this disk

debootstrap --arch i386 lenny /chroot/ http://ftp.debian.org

 

This one is also going to take a while; it will download and install all of the packages needed for a base system to run, except for the kernel. Once that is finished we’ll enter the new environment with

chroot /chroot

Until we leave the chroot, everything we do affects only that environment, which will become our AMI, not our host system. Now that we’ve created the environment we can start preparing it for Amazon and installing some of our software. Let’s first set up some of the ‘devices’ that are both required and nice to have.

mount -t proc proc /proc
cd /dev
MAKEDEV console
MAKEDEV std
#You'll need these to mount Amazon EBS Volumes.
mknod /dev/sdf b 8 80
mknod /dev/sdg b 8 96
mknod /dev/sdh b 8 112
mknod /dev/sdi b 8 128
mknod /dev/sdj b 8 144
mknod /dev/sdk b 8 160
mknod /dev/sdl b 8 176
mknod /dev/sdm b 8 192
mknod /dev/sdn b 8 208
mknod /dev/sdo b 8 224
mknod /dev/sdp b 8 240

 

Now let’s set a root password

passwd

 

Set up both networking and fstab

echo -e 'auto lo\niface lo inet loopback\nauto eth0\niface eth0 inet dhcp' >> /etc/network/interfaces
echo -e '/dev/sda1 / ext3 defaults 0 1\n/dev/sda2 swap swap defaults 0 0' > /etc/fstab

There are a few packages which will make things easier and at least one which is required (ssh).

apt-get install ssh locales
#locales & tzdata make it easier to install other packages
dpkg-reconfigure locales
dpkg-reconfigure tzdata

 

At this point you can install any packages you need; if this is going to be a LAMP system:

apt-get install apache2 php5 php5-mysql libapache2-mod-php5 mysql-common mysql-server

 

You may also choose to add any code that you would like to have. If you are running a website you can upload it. When you’ve installed all of the packages that you want, leave the chroot, and unmount it.

exit
umount -l /chroot

 

You’ll need to upload both of the certificate files that you downloaded from Amazon (pk-xxx & cert-xxx), and we’ll assign some variables to them (I keep mine in ~/.ec2):

export EC2_PRIVATE_KEY=~/.ec2/pk-xxxx.pem
export EC2_CERT=~/.ec2/cert-xxxx.pem

 

Set the Java environment variable:

export JAVA_HOME=/usr/

If you’re not sure where Java is, run:

whereis java

We’re almost done, I promise ;).

 

So now we’re going to bundle the image into an AMI and upload it to Amazon. To do this we’ll set a few variables to make things a bit easier.

#These are fake, I promise, although if you really want to try feel free.
export EC2_ACCOUNT_NUM=12345678901
export EC2_ACCESS_KEY=qwerty123
export EC2_SECRET_KEY=321ytrewq
export EC2_HOME=/opt/ami_tools

 

Let’s create a variable to store the name of the S3 bucket we’re going to upload to.

export S3_BUCKET=ami_bucket

 

Now let’s create the bundle; this is done by the Amazon AMI tools package we downloaded earlier. This process is going to take a while and will consume some disk space in /tmp/.

ec2-bundle-image -i $AMI_Path --cert $EC2_CERT --privatekey $EC2_PRIVATE_KEY -u $EC2_ACCOUNT_NUM

 

Now we’ll begin the upload process; this is also done by the Amazon AMI tools package we downloaded. This is also going to take a while, depending on the size of your image and the speed of your internet connection.

ec2-upload-bundle -b $S3_BUCKET -m /tmp/$AMI_Name.manifest.xml -a $EC2_ACCESS_KEY -s $EC2_SECRET_KEY

 

Now we can use the EC2 API to start an instance of our new image so that we can test it.

export EC2_HOME=/opt/api_tools

 

We can register the AMI which will allow us to create an instance of it.

ec2-register $S3_BUCKET/$AMI_Name.manifest.xml

 

The result from that command will include the ID of the new AMI; it will look like ‘ami-s35123’. Now let’s run an instance of this image.

#Be sure to use the AMI ID you got from the ec2-register command
ec2-run-instances ami-s35123

 

This command will also output some important information, including the instance ID in the format ‘i-chd3382’. We can use this ID to check the status of the instance.

#Again use the id from above.
ec2-describe-instances i-chd3382

 

If you haven’t already opened port 22 in your security group, you’re going to need to do so in order to connect to SSH on the new server.

ec2-authorize default -p 22

 

You should now be able to connect to the server. You can get the IP from the ‘ec2-describe-instances’ command, and with the same command you’ll see when it’s online and ready to use (its status will be ‘running’).
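Since this image uses the root password we set inside the chroot rather than an injected key pair, connecting is just a plain SSH login. A sketch, substituting the public DNS name or IP that ec2-describe-instances reported:

#Use the address from ec2-describe-instances
ssh root@ec2-xx-xx-xx-xx.compute-1.amazonaws.com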

A bash script to search recursively through a directory looking for the specified text. I’m just posting this because it’s something I use fairly regularly.

#!/bin/bash
# Usage: ./search <text> <directory>
lookingFor=$1
target=$2
MINLEN=${#lookingFor}
i=0
for f in $(find "$target" ! -type d); do
    # Don't search this script itself
    if [ "$f" == "$0" ]; then
        continue
    fi
    line=$(grep -i "$lookingFor" "$f" | sed 's/^[ \t]*//')
    len=${#line}
    # Only report files where something longer than the search term matched
    if [ "$MINLEN" -lt "$len" ]; then
        echo "$f:$(grep -in "$lookingFor" "$f" | cut -f1 -d:)"
        echo "$line"
        echo ""
        i=$(($i+1))
    fi
done

echo "$i files found with: $lookingFor"

Save this file (I like to call mine ‘search’) and set it to executable:

chmod +x search

To use it, just run:

./search needle haystack

In software development, most often when we want to generate a random number we have to settle for a pseudo-random number. However, there are times when you need a true random number, whether for cryptographic purposes or for lotteries. To generate true random numbers you need a True Random Number Generator (TRNG), which must be a piece of hardware. There are several on the market which use a variety of methods to generate their randomness. I used the circuit developed by Rob Seward, which uses avalanche noise between two transistors… I’ll let Rob Seward explain it in his own words.

The two transistors with their bases touching create “avalanche noise in a reverse-biased PN junction.” This noise is amplified by the third transistor and sent across a voltage divider to Arduino. On the oscilloscope, the signal looks very fuzzy. In software, that randomness is converted into a stream of 1s and 0s.

Since I love my Netduino, and I love building cool stuff, I decided to build one of these for my Netduino.

 

The code for it is fairly straightforward

using System;
using System.Threading;
using Microsoft.SPOT;
using Microsoft.SPOT.Hardware;
using SecretLabs.NETMF.Hardware;
using SecretLabs.NETMF.Hardware.Netduino;

namespace TRNG
{
    public class Program
    {
        #region configuration
        //Configuration
        //8 Bits gives you an int output of 0-255
        //16 bits gives you an int output of 0-65535
        //32 bit gives you a stackoverflow. :)
        private static int bits = 8;
        //End Configuration
        #endregion

        #region ports
        private static OutputPort led = new OutputPort(Pins.ONBOARD_LED, false);
        private static AnalogInput in_port = new AnalogInput(Pins.GPIO_PIN_A0);
        private static OutputPort _0led = new OutputPort(Pins.GPIO_PIN_D0, false);
        private static OutputPort _1led = new OutputPort(Pins.GPIO_PIN_D1, false);
        #endregion

        #region bias removal stuff
        private static int previous;
        private static int flip_flop = 0;
        #endregion

        private static int calibration_value = 0;
        private static string byt = "";

        private static void Loop()
        {
            int d = in_port.Read();

            if (d > calibration_value)
            {
                ExclusiveOr(1);
                //byt = byt + "1";
                _0led.Write(true);
                Thread.Sleep(30);
                _0led.Write(false);
            }
            else
            {
                ExclusiveOr(0);
                //byt = byt + "0";
                _1led.Write(true);
                Thread.Sleep(30);
                _1led.Write(false);
            }

            if (byt.Length == bits)
            {
                Output();
            }
        }
        private static void Calibrate()
        {
            DateTime start = DateTime.Now;
            TimeSpan ts = DateTime.Now - start;
            int count = 0;
            int total = 0;
            //Making this a bit larger may, or may not, provide better results.
            while (count < 10000)
            {
                total += in_port.Read();
                count++;
                //ts = DateTime.Now - start;
            }
            calibration_value = total / count;
            BlinkLed();
            return;
        }
        private static void vonNeumann(int input)
        {
            if (input != previous)
            {
                byt = byt + input.ToString();
                previous = (in_port.Read() > calibration_value) ? 0 : 1;
            }
        }
        private static void ExclusiveOr(int input)
        {
            flip_flop = (flip_flop == 1) ? 0 : 1;
            byt = byt + (flip_flop ^ input).ToString();
        }
        private static void BlinkLed()
        {
            led.Write(true);
            Thread.Sleep(30);
            led.Write(false);
        }

        public static void Main()
        {
            Calibrate();
            //setup Von Neumann
            previous = (in_port.Read() > calibration_value) ? 1 : 0;
            Debug.Print("Loop");
            while (true)
                Loop();
        }
        private static void Output()
        {
            //Don't have an LCD to output to yet...
            Debug.Print(byt + " = " + ParseBinary(byt).ToString());
            byt = "";
        }
        public static long ParseBinary(string input)
        {
            //Thanks to Jon Skeet for this one http://stackoverflow.com/questions/4281649/convert-binary-to-int-without-convert-toint32/4282972#4282972

            // Count up instead of down - it doesn't matter which way you do it
            long output = 0;
            for (int i = 0; i < input.Length; i++)
            {
                if (input[i] == '1')
                {
                    output |= 1 << (input.Length - i - 1);
                }
            }
            return output;
        }
    }
}

The first thing we do is calibrate the device, because as the device ages the output can drift a bit. We read the values from the ADC 10,000 times and average the readings, which gives us our middle point; any time we read a value higher than this we record a 1, and when we read lower it is a 0.  We can use this to create either numbers or letters, since it’s simply a binary stream. It is a good idea to have some filtering of the output, and I have implemented two methods to do that: vonNeumann and ExclusiveOr. The vonNeumann method provides the best results, however I find that in most cases I don’t need to use it. Your results will vary based on how noisy your environment is; I find that when I head out on the lake the numbers are far more stable than when I’m at home or even just in town. I would imagine that around large motors (such as those in factories) you will find more noise in your output.

The PCB that resulted from this design is available for purchase; I may make up a quick kit and make that available.

Today we lost one of the greatest visionaries the computing world has ever known: it was announced that Steve Jobs, co-founder of Apple, has died.  Trying to summarize just how Steve helped to shape the technological world we live in today is a task that many will attempt in the coming days; it is no easy feat.

 

“Steve, thank you for being a mentor and a friend. Thanks for showing that what you build can change the world. I will miss you.” – Mark Zuckerberg

 

After dropping out of college, Steve Jobs and his partner, Steve Wozniak, founded “Apple Computer”, which is now the second largest company in the world.  Whether you like Apple and its products or not is irrelevant; Steve Jobs helped to build the personal computer industry into the Goliath it is today.  While Steve had a reputation for being a bit eccentric, he was well respected; this is most clearly evidenced, in my mind, by the words that his long-time business rival Bill Gates had to say as the news broke.

 

“I’m truly saddened to learn of Steve Jobs’ death. Melinda and I extend our sincere condolences to his family and friends, and to everyone Steve has touched through his work. Steve and I first met nearly 30 years ago, and have been colleagues, competitors and friends over the course of more than half our lives. The world rarely sees someone who has had the profound impact Steve has had, the effects of which will be felt for many generations to come. For those of us lucky enough to get to work with him, it’s been an insanely great honor. I will miss Steve immensely.”  – Bill Gates

 

Steve’s influence stretched so far that today companies that compete with Apple, from Microsoft to Google, paid their respects; Google went so far as to post on their homepage.  The Wired (wired.com) homepage closely resembled Apple’s, with a picture of Steve and quotes from some of the biggest names in tech.

 

I have often aspired to be half as successful as Steve Jobs, and while I never had the pleasure of meeting him, I cannot help but share the sentiment that so many have expressed today: Steve will be missed.

 

“I am very, very sad to hear the news about Steve. He was a great man with incredible achievements and amazing brilliance. He always seemed to be able to say in very few words what you actually should have been thinking before you thought it. His focus on the user experience above all else has always been an inspiration to me. He was very kind to reach out to me as I became CEO of Google and spend time offering his advice and knowledge even though he was not at all well. My thoughts are with his family and the whole Apple family.” — Larry Page (Co-Founder of Google)

One of the great advantages of some languages, such as JavaScript and Ruby, is the ability to ‘monkey-patch’, or extend, the built-in classes. Most often this is used to extend the built-in types, which has the great advantage of allowing us to call methods of objects in the native format object.method(). I’ve often found it frustrating that JavaScript doesn’t offer a ‘contains()’ method for strings; this is easy enough to fix with some patching:

//Check to ensure that the method doesn't exist
if(typeof String.prototype.contains !== 'function')
{
 //I'm going to add a method to the String object's prototype
 String.prototype.contains = function (s)
 {
//indexOf(s) returns -1 if the given string (s) does not exist in the string it's called against.
if(this.indexOf(s) != -1){ return true; }else{ return false; }
 };
}

Now I have a method called ‘contains’ on all of my strings, which means that I can call it like

var foo = "Hello World!";
if(foo.contains("Goodbye"))
{
 alert("ByeBye");
}

This has some advantages, mostly just readability, but it also comes with some drawbacks. If you are using an external library you may be changing the functionality that the library expects. This makes the need for

if(typeof String.prototype.contains !== 'function')

an absolute must; however, even if the library plays nicely, you may still be changing its expected behaviour. I have a few others that I frequently include in my standard library:

if(typeof String.prototype.contains !== 'function')
{
 String.prototype.contains = function (s){
if(this.indexOf(s) != -1){ return true; }else{ return false; }
 };
}

if(typeof String.prototype.trimLeft !== 'function')
{
 String.prototype.trimLeft = function ()
 {
	//Remove any whitespace from the start of the string
	return this.replace(/^\s+/, '');
 };
}

if(typeof String.prototype.trimRight !== 'function')
{
 String.prototype.trimRight = function ()
 {
	//Remove any whitespace from the end of the string
	return this.replace(/\s+$/, '');
 };
}

if(typeof String.prototype.trim !== 'function')
{
 String.prototype.trim = function ()
 {
return this.trimLeft().trimRight();
 };
}

//These two are good for some Google Maps work.
if (typeof Number.prototype.toRad !== 'function')
{
 Number.prototype.toRad = function ()
 {
return this * Math.PI / 180;
 }
}
if (typeof Number.prototype.toDeg !== 'function')
{
 Number.prototype.toDeg = function ()
 {
return this / Math.PI * 180;
 }
}