Friday, July 31, 2009

Server builds

While I'm chugging away trying to get OpenSolaris to be all that it can be, my friend was trying to get a server going with CentOS 5.3. One hiccup for him was trying to mount an HFS+ filesystem from his Macbook. The correct syntax to mount HFS+ (substitute your own device and mount point) is
#mount -t hfsplus /dev/<device> /<mountpoint>

ZFS, CIFS (Samba), OpenSolaris, and Windows Hanging out

I'm pretty pumped. I came home early from work this morning (went in at 4am, came home at 8am) and went to work on the server. First, because the RAID card wasn't supported in Solaris or OpenSolaris and I couldn't find third party drivers for it, I moved the 4x 500GB Western Digital Caviar Blue harddrives to the onboard SATA controller. Next, I had some issues with Synergy yesterday where it would take some time to shift between the Windows box and the OpenSolaris box. Fresh out of ideas and sleep, I added in a second NIC, which did nothing, as it wasn't supported. I then realized I had forgotten to turn off the desktop extension onto the second monitor on my Windows box, which would explain the extra "time" or movement necessary to get between the two monitors/systems. I'm with stupid.

I started up the system and tried to remember what to do. This site had some helpful information, but didn't work perfectly, so what I'm showing you is for OpenSolaris 2009.06 and may not work on other versions or even your install. First off, I tried to get the CIFS service working. I did these commands:
#rem_drv smbsrv
#pkg install SUNWsmbskr
#pkg install SUNWsmbs
#add_drv smbsrv
#svccfg import /var/svc/manifest/network/smb/server.xml

And then ran into issues. The smb service showed as being in maintenance mode. A quick google turned up nothing, so I remembered my issues last night and did the magical reboot. When the machine came back up, smb was right there with it. Hooray! I win a prize.

Next, I needed to attach to my workgroup:

#svcadm enable -r smb/server
#smbadm join -w WORKGROUP

WORKGROUP is, well, my workgroup name. It connected happily now that I wasn't in maintenance mode anymore. The walkthrough said that there were issues with CIFS and Unix-style passwords, so I changed the PAM configuration a bit to fix that.

#echo "other password required pam_smb_passwd.so.1 nowarn" >> /etc/pam.conf
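For reference, the line that should show up at the bottom of /etc/pam.conf looks like the sketch below; pam_smb_passwd.so.1 is the module name the Solaris CIFS documentation uses, and the whitespace between fields is flexible:

```
other   password required       pam_smb_passwd.so.1     nowarn
```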

After a quick vim to make sure it was truly there, I did

#passwd <username>

That's not what my username is. That's a secret I care not to share. So I was all set! Time for some ZFS zaniness!

The walkthrough I was using directed me to access the ZFS GUI at https://localhost:6879/zfs, which I immediately tried and failed at. After more googling, it appears that it's not installed, and I didn't want to deal with a GUI anyways since I had seen a good walkthrough elsewhere without any GUI. Enter Simon's Blog! This is what I did:

#format

I now had the list of my harddrives and their names.

#zpool create tank raidz2 <disk1> <disk2> <disk3> <disk4>

The four disk names at the end are the ones format listed; yours will differ. I did raidz2 because I'm paranoid: I built this file server because I hate losing data and didn't want to anymore. This created a new pool called tank, which is viewable with this command:

#zpool status tank

Also, to see raw data space and true available space, respectively:

#zpool list tank
#zfs list tank
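Since I knew the drive sizes going in, here's a quick back-of-the-envelope check of what raidz2 should leave me with my four 500GB drives (the real zfs list number comes out a bit lower due to filesystem overhead):

```shell
# raidz2 spends two drives' worth of space on parity, so usable space
# is (number of drives - 2) * drive size; these numbers match my setup
drives=4
size_gb=500
usable=$(( (drives - 2) * size_gb ))
echo "${usable} GB usable"
```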

Due to the fact that I'm using raidz2, half of my space goes to parity/mirroring/magical protection. Next I set up my destination for my storage space.

#zfs create tank/home
#zfs create tank/home/media

A quick

#zfs list

showed the mounts to be available, so I moved to the directories and gave my user ownership with

#chown <username> media
#chgrp staff media

Next, I shared it like no preschool teacher could've taught. (If you are a preschool teacher, I apologize and insist I meant no offense; preschool is just where I was taught to share.)

#zfs set sharesmb=on tank/home/media

Next, find out what it's named:

#sharemgr show -vp

Then

#svcadm enable -r smb/server

And like that, my server was up and running. I was skeptical. It seemed easy, but why would it be? I switched to the Windows box, did a Run of \\<server>\media, and BAM, there came the sign in. For the username, don't forget to prefix it with the server name (<server>\<username>), otherwise it will most likely try to sign you in as a user local to the Windows box, which isn't what you want. Anyways, after a second of soul searching, Windows popped up a new file browser with my test file in it! I was hooked (up). Next, I mapped that baby to my Z: drive, switched my "Documents", "Desktop", "Downloads", "Pictures", and "Music" folders all to the OpenSolaris box, and waited while all of it transferred across. Hooray! Now when I save anything in my documents or pictures folders it's replicated, protected, and error checked! No one will know otherwise! In a word, I'm super pumped. Unfortunately I'm limiting out at 10-ish MB/s transfers, but that's because my Windows box's NIC can't seem to convince my switch that it can run at gigabit instead of 100mbit. An item for another day.
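That 10-ish MB/s ceiling is consistent with a 100mbit link: 100 megabits per second is only 12.5 megabytes per second before protocol overhead, so 10 MB/s of real throughput is about right:

```shell
# theoretical ceiling of a 100 Mbit/s link in MB/s (8 bits per byte)
awk 'BEGIN { printf "%.1f MB/s\n", 100 / 8 }'
```

Gigabit would raise that ceiling to 125 MB/s, at which point the disks start to matter more than the wire.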

Thursday, July 30, 2009

Stubborn

I wasn't going to sleep until I got that damn driver working. I did get it working; the video I watched showing the install skipped a step that was required for me. Get the driver here if you have an Atheros L1E or L2 networking card. This works for my Asus M4A78-E board. Next, run these commands.

#tar xvf atge-2.6.5.tar.gz
#cd atge-2.6.5
#make install
#./adddrv.sh

Then reboot! I wasn't rebooting for the first few tries and was getting frustrated; once I rebooted and got back in, my network monitor told me my new IP address! Success. I've got Synergy going now too, so I'm all set to start setting up my ZFS file system and sharing.

Damn these clocks

Further troubleshooting will have to wait. I'm going in to work at 4am tomorrow, so I'm going to bed for now. I'll be back at lunch time to troubleshoot and crunch. I did find some instructions that made it look relatively simple to install the driver, so I'm hopeful.

The pain and humiliation

I spent a good three days researching hardware. I wanted add-on card RAID, and then I wanted a quad-core 65W processor, and I wanted it to be cheap. I thought I was watching for OpenSolaris drivers, but I was obviously not. I got distracted by price and by having a second 16x PCI Express slot for my add-on SATA RAID card, which does not have an OpenSolaris driver (that I've found yet). The card in question, the Adaptec AAR-1430SA, on paper seems pretty sweet, but I failed miserably by not checking drivers out first. I'm fully Linux supported and can get along that way, but I want ZFS! The motherboard I got, the Asus M4A78-E, does everything fine except the network driver. Atheros L1E? What? This site has a driver that will work, and I'm currently trying to get it working. I can do without the add-on card RAID, but it would be nice to have. I've installed OpenSolaris 2009.06 and CentOS 5.3 (dual boot), each without any trouble during install, just with the Solaris install requiring drivers for the SATA card and networking.

QR-Codes

If the image makes no sense to you, let me explain. First off, this isn't exactly Unix based in any way; it's a fun toy I was introduced to today. Many cell phones today can get applications that read barcodes of all types, making it easy to shop around while in a store for a particular item. They can also be used to encode information that can be scanned by someone else. For instance, you could put a barcode on the back of your business card with your contact information. Someone could scan the barcode, and bam, they have your contact information in their cell phone. It's crazy! It's genius! Why has it not caught on in America like it has elsewhere?
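As an example of the kind of payload such a barcode carries, a contact QR code usually just encodes a plain-text vCard like the sketch below (the contact details are made up):

```
BEGIN:VCARD
VERSION:3.0
FN:Jane Example
TEL:+1-907-555-0100
EMAIL:jane@example.com
END:VCARD
```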

[qrcode image]

Cable Management, Power Use

With the harddrives and SATA RAID card arriving yesterday, I spent some time juggling hardware. I pulled my TV card out of my main workstation, as well as the 500 GB harddrive I already had. I also swapped in the 750W power supply from the server, putting the 700W power supply in the server since it will be on a whole lot more. I'm not sure if the 750W power supply is quieter, or if it's the dust I blew out, or the lower power draw, but the workstation seems much quieter now. In the server tower I spent some time getting the cabling really trimmed down so that heat buildup wouldn't ruin anything. The case looks pretty clean so far. The motherboard and CPU are waiting in the mailroom for me to pick them up, so tonight I should have it fired up and hopefully ready to go.

Wednesday, July 29, 2009

Virtualization

Microsoft Windows is a widely used operating system for a few very good reasons. First, Microsoft Office is hands down the best option for word processing in my opinion, and Outlook has some very nice features. Also, Active Directory is a relatively good way to keep track of a large number of domain users. Because of these two things, I find myself forced to use Microsoft Windows, and while I enjoy these aspects of it, I still love Linux and Unix and BSD and Solaris. For this I use virtualization. Now, virtualization is an up and coming technology, so there are a lot of changes being made. That being said, here is my experience with two separate desktop hypervisors.

VMWare. VMware offers a wide range of products, from desktop hypervisors to enterprise solutions for virtualizing large numbers of servers. They have two desktop products, VMware Workstation and VMware Server. VMware Workstation is not free like Server, so I've used it only in trial settings for short periods of time. It installs as an application, and you configure, start, stop, and do everything else from that application. The Server version runs as a server, in that there isn't an application specifically for it, and others can connect to it from elsewhere. To access the interface, you use a web browser. The web interface is a bit clunky in my opinion, and I found myself right clicking far too often, seeing as it isn't allowed. The virtual machines themselves are opened into the Virtual Machine Viewer, another piece of free software, and you can work with them there. Installs are at times frustrating; it's almost impossible to get into the BIOS to change boot priorities, so if you've installed an OS it's hard to install something else over top of it. I also don't like using the web browser, as it isn't always very quick. I used VMware Server for about a year and a half. I never had issues with compatibility: using USB devices on virtualized machines, installing lots of distros of Linux and even some Windows installs, even installing a hacked Mac OS X onto it.

Virtualbox. VirtualBox is a free solution by Sun Microsystems. It runs as an application and is quick and easy to install. My first install went very quickly. I liked how there was an accessible POST screen so I could jump into the BIOS, and they had very intuitive controls for adding hardware and locating media already on the computer. The VMware solution required the ISOs to be in the folder with the virtual machines, which at times was frustrating, as I would never remember to move them there or didn't want to store them there. VirtualBox can grab them from anywhere and doesn't move them either. VirtualBox does have a lot of pop ups that warn you about interfacing between the virtual machine and your computer, but after clicking the "do not show" checkboxes they were out of my way. I've had no problems to date with VirtualBox, but there have been some things that are slightly annoying. I wish I had more control over my virtual networks. While trying to get two virtual machines to see only each other, I found that the virtual network didn't allow for this. Also, running an ipconfig command on a Windows box with VirtualBox installed throws up an extra six or so virtual interfaces. In its defense, VMware isn't much better, though I feel it is a little better about how many it adds.

Virtualization is great for when you have a single system and a desire for multiple operating systems. I've used it for testing software installs, operating system installs, and live boot ISOs. It's quick and easy. VMware makes products for both Windows and Linux, and I've run it on both without much difference; VirtualBox is available for Windows, Solaris, and Linux.

Tuesday, July 28, 2009

Killing Time

After getting Synergy up and working, and overheating my room while playing with OpenSolaris, I made a decision to order some new hardware. I should be getting some parts to set up an OpenSolaris NAS server. Here is what I will have.

4x 500GB Western Digital Caviar Blue harddrives
4gigs DDR2-800 ECC Memory (I am building the server to keep my data clean; it's worth the extra cost)
Phenom II X4 65W Processor (I wanted a 65 watt quad core, and I wasn't trying to spend a lot)
160GB System Harddrive (SATA)

The parts should be here soon, and I will get my system together. In the meantime, I've been spending some time trying to get the DHCP server working. I'll walk through the entire process when the system is assembled.

OpenSolaris, first install on physical hardware

I have an old server from 2004-ish. It was a dual-processor board with one socket used, and an outrageous 1GB of RAM. It had a discrete graphics card and a single gigabit Ethernet port. I don't remember what its original purpose was; most likely I used it to host a gaming server and a statistics website for the gaming server. I had planned on upgrading it to dual processors with more memory, but that never happened. It spent most of its life not being used, unfortunately. Going to college in the lower forty-eight meant it didn't make the cut and get shipped down, at least not for a while. The point is, though, that it's an old server. And OpenSolaris installed without a stutter. It was just as easy as on the VM, but it did take a touch longer since it had to format the disk, which wasn't 8 gigs this time. The server was put next to my main workstation, and one monitor was connected to it. At work, I've been using Synergy with Windows 7 and Linux Mint so I can use one keyboard and mouse for two systems without a KVM. If you've never heard of it, go to http://synergy2.sourceforge.net; it's worth your time. Great piece of software. Anyways, I decided I would get Synergy up and running between my Vista x64 box and OpenSolaris and found a quick tutorial. Here are the steps to get Synergy installed, which I got from here. This was the first piece of software on Solaris, keep in mind, so I was blown away by the ease of install.

wget ftp://ftp.sunfreeware.com/pub/freeware/intel/10/libgcc-3.4.6-sol10-x86-local.gz

gunzip libgcc-3.4.6-sol10-x86-local.gz

pkgadd -G -d libgcc-3.4.6-sol10-x86-local

wget ftp://ftp.sunfreeware.com/pub/freeware/intel/10/synergy-1.3.1-sol10-x86-local.gz

gunzip synergy-1.3.1-sol10-x86-local.gz

pkgadd -G -d synergy-1.3.1-sol10-x86-local

It was that easy. Because my Windows box served as the server for Synergy, I won't go into the config much. Details on that can be had at Synergy's website. This is the command you run to connect to the server (substitute the server's hostname or IP address):

/usr/local/bin/synergyc <server-address>

Instant software KVM. I was very very pleased as this software install was extremely quick and easy, easier than installing it on Linux, although that could be because I had installed it on Linux before.
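Since the client side is that simple, for completeness here's roughly what a minimal synergy.conf on the server side looks like; the screen names (winbox, solarisbox) are placeholders for your own machines:

```
section: screens
	winbox:
	solarisbox:
end
section: links
	winbox:
		right = solarisbox
	solarisbox:
		left = winbox
end
```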

My physical install went quickly and easily, and I had a system I could easily work on with my single keyboard and mouse. That was until the heat generated by the old Xeon box heated the office up to 85 degrees, where it stayed while the CPU fan picked up speed to help cool it. Back to the drawing board!

OpenSolaris, first install, first impressions

The Live Boot ISO means that trying out OpenSolaris is easy, very easy. However, I wanted to really test out the "easy to use" ZFS set-up. I've been using VirtualBox, another Sun Microsystems product oddly enough, for a while, and set up a new virtual machine to test out OpenSolaris. I work in an office that does mostly server administration and support, and I've often heard the word Solaris thrown around as a curse word on the few occasions it is worked on, so I was skeptical. But right away, the familiar GRUB boot loader began, and then it was into the familiar Gnome desktop. First off, let me point out that I've tried a lot of distros of Linux, both KDE and Gnome, and I must say that the default desktop caught me off guard by how clean yet attractive it was. The default profile was eye-catching and new! What else could I ask for? To start the install there is a nice easy install icon shortcut on the desktop, like many recent live distros. Granted I was using a virtual machine and an ISO image, but the install was very quick and very easy, and before I knew it I was watching the progress bar as it installed itself. A quick reboot and off I went to OpenSolaris-land.

I added four small drives to the virtual machine and within 5 minutes had what is called a raid-z2 array. It was really really easy. I was impressed. Next step was trying it out at home.

OpenSolaris 2009.06

The digital world has forced us to create digital solutions to digital problems. For instance, my other hobby, photography, now creates hundreds if not thousands of digital files, which are easily lost. To prevent this, I used to burn them to discs, either CDs or DVDs. But recently it has come out that the data on these burned discs can be corrupted in a shorter period of time than originally thought, some 10 years or so. Because of this, I began thinking about a file server, one with redundancy and lots of storage. I didn't want to worry about this stuff anymore. Enter OpenSolaris.

OpenSolaris is the open source version of Solaris, an operating system made by Sun Microsystems for a long time. Sun used to be a hardware company disguised as a software company. Their database software, MySQL, is a well known open source solution, as is OpenOffice, an open source alternative to Microsoft Office. Now, however, as many fringe architectures are falling apart thanks to the economy, they've branched out to the open source community. The OpenSolaris discs are available as Live Boot discs from OpenSolaris.org, and are very friendly, especially to those who have used any version of Linux, BSD, or Unix in the past.

My interest in OpenSolaris was piqued when, while reading an article on what type of RAID to use in my upcoming file server, I learned about ZFS. ZFS is a file system developed by Sun Microsystems to be the end-all of end-alls. ZFS originally stood for Zettabyte File System. In case you were wondering, a zettabyte is 2^70 bytes; that is roughly one billion (1,000,000,000) terabytes. The file system was designed to be usable from now until we as humans realize we have too much data. But I don't have this much data. What attracted me to ZFS is that it was designed for snapshots of data, continuous integrity checking, and automatic repair.
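The billion-terabytes figure is easy to sanity check: a zettabyte is on the order of 10^21 bytes and a terabyte is 10^12, so the ratio is 10^9, which shell arithmetic can confirm (written as 1000^3 to stay well inside 64-bit integer range):

```shell
# 10^21 bytes per zettabyte / 10^12 bytes per terabyte = 10^9 = 1000^3
echo "$(( 1000 ** 3 )) TB per ZB"
```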

OpenSolaris makes it easy to get into ZFS, with the Solaris name to back it up. We'll see what I can get it to do for me.

For more information, visit http://www.opensolaris.org, or for more information on ZFS visit http://opensolaris.org/os/community/zfs/