OpenSUSE Linux Rants

OpenSUSE Linux Tips, tricks, how-tos, opinions, and news


July 16, 2009

SSH Attack Foghorn

by @ 6:20 am. Filed under bash, General Linux, Linux tips, ssh tips, sweet tools, Work-Related

I don’t like it when people try to hack my web servers. To make myself aware of people trying to access my ssh daemon, I wrote me a little script. Yup, I’m certainly aware of DenyHosts. Notwithstanding, in the hopes that this script may find use elsewhere, I post it here. Behold, enjoy, and chuckle a bit at how much better you could write it. Then, let me know how you’d improve it:

#!/bin/bash
LOGFILE=/tmp/ssh_foghorn.log   # any scratch location will do
PATTERN="^$(date --date='1 minute ago' '+%b %e %H:%M:')"
tail -n 1000 /var/log/messages | grep "$PATTERN" | grep sshd | grep -i "invalid user" | grep " from " > "$LOGFILE"
if [ $(stat -c%s "$LOGFILE") -gt 0 ] ; then
	echo "See the attached log for details" | mailx -a "$LOGFILE" -s "Possible hack attempt" YOUREMAIL@YOURDOMAIN.COM
fi
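Before wiring it up to cron, you can sanity-check the filter chain against a fabricated log line (the timestamp pattern is hardcoded here just for the demo):

```shell
printf '%s\n' \
  'Jul 16 06:19:01 myhost sshd[1234]: Invalid user admin from 10.0.0.5' \
  'Jul 16 06:19:02 myhost cron[999]: job started' \
| grep '^Jul 16 06:19:' | grep sshd | grep -i 'invalid user' | grep ' from '
# only the sshd "Invalid user ... from ..." line survives the pipeline
```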

Copy it to your /root folder. Name it something cool like ‘ssh_foghorn’, and chmod +x it to make it executable. Add it to your /etc/crontab file to run once every minute. Make sure the script points at whatever system log your distro uses. And change the email address to your own. It doesn’t cure cancer, but for 8 lines of code, it does what it needs to.
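For reference, the /etc/crontab line would look something like this (the path assumes you named the script ssh_foghorn and dropped it in /root):

```
# system crontabs take a user field (root) before the command
* * * * *   root   /root/ssh_foghorn
```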

Again, I’m sure there are better ways to do this, so let’s hear ’em!

October 29, 2007

SUSE Admin Job Announcement

by @ 11:41 am. Filed under General SUSE, Work-Related

Job information is as follows. Please contact Jana directly.

Position: Sr. SUSE Linux Architect/ Analyst
Start Date: ASAP
Duration: 14+ Months
Rate: Negotiable
Location: Peoria, IL
Job ID: 10-5-OSTA

Provide technical expertise, consultation, and guidance for the development, deployment, support, and performance tuning of Linux (SLES 9/10 SP1) for the Power and x86 processor chipsets.

PRIMARY RESPONSIBILITIES include, but are not limited to:
The candidate will be expected to have experience with network based enterprise deployments of SUSE Linux enterprise server.
Must possess a deep technical knowledge of SUSE Linux Enterprise server and have hands-on experience in deploying and troubleshooting systems in a large scale enterprise environment
Must possess a deep technical knowledge and have hands-on experience with driver integration for SUSE Linux Enterprise Server for both Intel and Power based platforms
Must possess an expert level of knowledge for SUSE Linux Enterprise Server provisioning tools
Must possess excellent interpersonal and communication (both written and verbal) skills with the ability to interact effectively in a working team environment

Experience with provisioning enterprise Linux server builds using YaST or other automatic network based server build solution required
Experience with Bash shell scripting required
Experience with Setup and creation of a network deployment depot
Experience with implementing, configuring, maintaining and troubleshooting SUSE Linux Enterprise Server
Experience with System p Virtualization technologies (Power 5 + virtual I/O)

Special Requirements:
This individual will normally work first shift, although they need to be shift-flexible. They will be expected to work additional hours when required, including 2nd shift, 3rd shift, weekends, and holidays. This individual will be expected to be on a rolling call list to provide after-hours support. They will be asked to carry a cell phone and/or pager.

The external associate will provide knowledge transfer to existing team members whenever possible. Recommended activities include:
Providing mentoring and guidance to team members on their areas of expertise
Attending and actively participating in Action in Review (AiR) and/or Action Oriented Debrief (AOD) sessions (Lessons Learned sessions)
Communicating lessons learned or best practices to team members
Submitting lessons learned or best practices for publishing within knowledge repository when the potential for reuse in other areas exists
Creating any documentation necessary to support, sustain, or reinforce knowledge sharing activities and ensuring it is stored where the team can access it if the associate transfers to another assignment. (ex: job aids, process and procedure, reference documentation, contact lists, background on strategy decisions, etc.)

For more information, please contact:

Jana Schoberg
Resource Manager/ Technical Recruiter
WiseChoice IT
414.773.0670 – phone
414.607.2066 – fax – email – website – LinkedIn

June 7, 2007

Job announcement for True North Academy

by @ 9:21 am. Filed under Work-Related

True North Academy, an American Fork-based financial coaching company, seeks a qualified LAMP programmer. This is a full time position with a fast growing, well established local company. The position is responsible for PHP systems analysis, systems design, coding, testing, debugging, documenting, as well as MYSQL database design and management. The candidate will also be responsible for implementing internal and external web software solutions. The person filling this position will be expected to work closely with other members of the IT team to create, maintain, and enhance client-server and web-based systems.

Key Qualifications:

• 4 year degree in Information Systems, Computer Science or related field
• 3-4 years of experience working with PHP and MySQL in an enterprise-class environment.
• Experience working with large OOP-based PHP applications.
• HTML, CSS, JavaScript, XML experience required
• Experience with AJAX is preferred
• Knowledge of object-oriented design and implementation principles required
• Knowledge of Linux system administration, especially openSUSE Linux, is a plus
• Postgres experience is preferred

The candidate must have excellent written and verbal communication skills, be willing to work in a team environment, and possess strong interpersonal skills along with problem-solving skills and creativity. The candidate must also have the ability to manage multiple projects and set priorities appropriately, and be highly motivated and willing to go the extra mile to ensure quality, functionality, and scalability.

Salary offered begins at 65k. A candidate with greater experience will receive greater compensation.

Send all applications to smorris –nospam– at

May 8, 2007

The Weekend From Hell

by @ 9:04 pm. Filed under Work-Related

This blog post could easily be titled, “The Weekend I Got Kicked In the Face.” Since I didn’t, though, that may be a bit misleading. It just feels like it. Heh.

It started Friday, when I had my car in the shop all day getting new 4.10 gears put into the rear differential of my new Crown Victoria Police Interceptor. Right at noon, our production server went offline. I was told at 3:30 that if the production server was not back online at 5:00, we would be rebuilding it on a new server. I’m a firm believer in colocation, now. Our managed hosting provider did not like something we did, and yanked our account.

At 4:30, the guy called me telling me my car was finished. My friend Jason raced me to the shop to pick it up. We were also preparing ourselves to spend the next 24 hours rebuilding a server. When I got my car, they told me that my rear brakes were shot, along with the rotors and emergency brake shoes. I could have them fix it for a mere $460. Turns out, the next morning I spent $410 on parts, and not only redid the rear brakes, but changed the front pads as well. Ceramic brake pads all around. It’s the only way to go if you ask me.

The only problem is that this took until 1:30 AM Sunday morning. At around 7:00 Saturday night, my work began calling me, asking when I could come in and rebuild the server. I told them that my car was immobile at the moment. You may ask why, if I knew my work needed me, I would change my brakes, thus leaving myself unable to attend to the needs of my employer.

Well see, the plot thickens. We have three cars, one of which (the 1999 Crown Victoria LX) is out of commission, having a blown engine. Next weekend, we will be getting a new engine put into that particular vehicle. I just can’t seem to part with it. I’ve had it since it had 19,000 miles on it, and the engine just went out at 152,000. It’s all good, because the new engine only has 49,000 on it. Anyway, that one will get some help next weekend.

That leaves us down to two available cars. Not quite that simple, however. We have to sell one of these remaining cars (1995 Honda Civic LX) to get enough money to fix up the other two (’04 Crown Vic and ’99 Crown Vic). $600 to put in the new engine, a few hundred more to get it into tip-top shape, and another few hundred to wrap up some repairs to the ’04 (rack & pinion steering hose, exhaust leak in the EGR tube).

Thursday night at 11:00 PM, I put the Honda up for sale, and by Friday at 7:00 PM, I had $200 in my hand to hold the car until Monday. The new owner would come by and give us the remaining $2800 then.

The upshot of all this is that I had until Monday to get some good brakes onto my ’04 Crown Vic. It would stand as the only car in which my family would have the means to travel, probably for a couple of weeks. Thus, the brakes were to be done on Saturday, taking the gamble that I would either not need to go in to rebuild the server, or that they could wait long enough for me to finish my brakes, and I wouldn’t lose my job. As luck would have it, I have a great place of employment, and they were very understanding.

Fast-forward to Sunday morning at 1:30 AM, when I was putting the last wheel cover back on after all the brakes were done. I called my work back and told them I would be in by 2:00. I took a quick shower, threw some fresh clothes on, and jumped into the car with the brand-spanking-new brakes, hoping that I did it right. With some fair amount of luck, I made it to work, where I spent the next 8 hours setting up the LAMP stack, installing PHPMyAdmin, restoring code and database backups to the new server, and setting up an email server. Just as I was finishing my final touches to the email server, my wife called.

She was going into labor.

Again, I was on a mega-short time crunch. I finished the email server and typed out some instructions for the techs who would be in the next shift, finishing setting up the server. I then shot home like a rocket. Sometimes, it does help to drive a car that really really looks like a cop car (boy, you should see people fly out of my way on the freeway). I got home in no time.

We packed up everything and dropped Azzie off at her grandma’s house. Hospital-bound, we set out.

At 2:09 that afternoon, my son Evan was born (8 lbs 7 oz, 20 1/2″ long). After we were relocated to a recovery room, my body realized it hadn’t been asleep since the morning of the previous day. I began fading at roughly the same rate as if someone was hitting me in the head repeatedly with an aluminum baseball bat.

I haven’t been kicked in the face (at least at any time during the past few days), but boy, it sure feels like it. Let’s just say this past weekend was the longest three weeks of my life.

March 13, 2007

One Step Closer to Linux Domination

by @ 7:03 am. Filed under How-To, SUSE Tips & Tricks, Work-Related

Yeah, I’m still working through the semester here at school. I sure don’t care for it much. That’s enough about school.

Alrighty, so the other day, I was out picking up my brother who was without wheels because he had just sold his truck. My boss calls my cell phone. His first words were, “Are you in the building?” You see, this is not some inside joke about Elvis. He only says that when something is very wrong, because he wants to know how many seconds it will be before it will be fixed. He wants to get an estimate of the number of feet I will have to travel before I can address whatever exploded.

Unfortunately, at that precise moment, my answer was “No.” He said, “How long will it take you to get back?” At this point, I was wondering what possibly could have gone wrong in the 4 1/2 seconds since I had left to pick up my bro. Was it the bandwidth I was taking up, simultaneously downloading all of the CDs of the alpha release of openSUSE 10.3? I casually countered, “Why, what is going on?”

He said, “Our DHCP server has gone out on the SonicWall firewall, and we need one up as soon as humanly possible.” I said, “OK, I will be back in three minutes.” I would bet you lunch that this was about 170 seconds more than he wanted to wait, but he said, “OK, just get back as soon as possible.” I assured him that I would do everything possible to bend actual spacetime in such a way that I could get back before I left (and maybe even hold the door open for myself as I was leaving the building, but I didn’t remember myself having done that as I was walking out, so I didn’t think that I actually would be able to. As it turns out, I couldn’t, which was really disappointing).

My brother and I immediately headed back to the building (he works with me). As I walked in the door, I didn’t see myself walking out of the building, which is how I knew I hadn’t actually been able to go back in time. I did call my boss, however, to let him know that I was embarking on the mission to reassemble the network. As it was, no one could get an IP, which left a lot of Windows users with that confused look they get when stuff doesn’t “Just Work”™. Well, we didn’t want their heads to explode, so I grabbed my SUSE CDs and headed into the server room.

I pulled up YAST, installed the DHCP server, and turned that baby on. In the time it has taken you to read this much of my story, I had the company network back up. Let’s hear it for Linux saving the day, yet again. I went down to my desk and set up a few static IP addresses from there for some of our servers. This is also super easy. Just edit /etc/dhcpd.conf. Don’t change any of the stuff at the top, but add host entries to it according to this format:

host [HOSTNAME] {
  hardware ethernet [MAC ADDRESS];
  fixed-address [DESIRED IP ADDRESS];
}


Just replace [HOSTNAME] with a description of the machine. Note that this does not make it resolve to that name, DNS-style. It just gives you something to refer back to so that you can identify the machine for which it is set up. Also, swap out [MAC ADDRESS] for the (yep, you guessed it) MAC address of the NIC in the host for which you wish to set up a static IP. Then, where it says “[DESIRED IP ADDRESS]”, you are going to put (you are exactly right) the IP you wish to assign to that machine.

As an example, let’s call the machine FRED, the MAC will be 00:24:EB:F1:88:8C, and the IP will be (for the sake of example) 192.168.0.50. This is what you will put in there:

host fred {
  hardware ethernet 00:24:EB:F1:88:8C;
  fixed-address 192.168.0.50;
}


After you have it set up how you want, just restart the dhcp server:

[0014][scott@mybox:~]$  su
mail:/home/scott # /etc/init.d/dhcpd restart


It was just about that easy to get our entire network back up and running in less than 5 minutes.

I’m telling you, Linux is your friend.

Besides that, Dell now has a survey about how the community wants Dell to provide Linux: everyone take it.

January 18, 2007

LAMP specialist required

by @ 2:09 pm. Filed under Work-Related

At my place of employment, we are looking for a PHP programmer who specializes in Linux (SUSE preferred). Initially, it is a contractor position, but could become a full-time position. We are based in American Fork, UT. We are growing too fast to keep up. We need someone who has exceptional skills with each component of the LAMP stack. We use a lot of object-oriented PHP as well. This is mainly a PHP developer position. Email me if you are interested. smorris — at — suseblog — dot — com.

School has kicked in full force, again, keeping me from posting much on my blog. It’s almost been up a year. Woot.

Alrighty, well, if you are a PHP wizard and could use some extra money, please let me know.

September 25, 2006

Making the megaraid module work with new kernels (HP NetRAID 1M/2M)

by @ 6:34 am. Filed under General SUSE, How-To, SUSE Tips & Tricks, Work-Related

How To Install SUSE 10.1 on a machine with a Hewlett-Packard NetRAID 1M/2M:

This is an experience that I had recently at work with an HP Netserver LP 2000r machine. My boss asked me to install SUSE 10.1 on it.

It might be worth it to note that any time I blow away a machine’s installation, I get some information from it before I do so. I’m glad I did in this case. I doubt I could have done this without it. The magic I use to conjure up this information is as follows:

# dmesg > dmesg.txt
# yast2 hwinfo
[save out the information into a text file]
# zcat /proc/config.gz > kernel_configuration.txt
# lsmod > lsmod.txt
# lspci -v > lspci-v.txt
# cp /boot/grub/menu.lst ./menu.lst
# cp /var/log/messages ./kernel-messages.txt

I just make a bzipped tarball of all these files and move them to another machine. That way, I know what modules were loaded, what kernel configurations were, and huge loads of other stuff, especially with the file created by exporting the hwinfo information. It is very fortunate for me that in this case, I did all of this.

I began by booting from the SUSE 10.1 CD 1 install disc. I got about two screens into the installer when the machine completely stopped responding. During the installation, you can press CTRL+ALT+F4 to see the debugging output and messages from the kernel. Reviewing the output, I saw that it was trying to access the hard disk, which was now completely unavailable because the kernel module had crashed. I tried turning the machine off and starting over, but the same thing happened.

A bit of investigation revealed that the Hewlett-Packard NetRAID 1M RAID controller that we are using in that box is supposed to use the megaraid_mbox kernel module as its driver. This kernel module is what was crashing. That’s when I hit Google to see what other information I could find.

After a little research, I found that the newer kernels have lost support for legacy megaraid controllers (which the NetRAID 1M certainly is). Extensive searching led me to a page that offered me a glimmer of hope. It contained a patch that would make even the new megaraid module support the legacy RAID controllers.

This patch is as follows:

--- kernel-source-2.6.11/drivers/scsi/megaraid.h	2005-03-01
23:38:09.000000000 -0800
+++	2005-07-05
10:05:44.000000000 -0700
@@ -84,6 +84,10 @@
 #define LSI_SUBSYS_VID			0x1000
 #define INTEL_SUBSYS_VID		0x8086

+/* Sub-System Device IDs */
+#define HP_NETRAID1M_SUBSYS_DID		0x60E7
+#define HP_NETRAID2M_SUBSYS_DID		0x60E8
 #define HBA_SIGNATURE	      		0x3344
 #define HBA_SIGNATURE_471	  	0xCCCC
 #define HBA_SIGNATURE_64BIT		0x029

--- kernel-source-2.6.11/drivers/scsi/megaraid.c	2005-06-16
08:06:21.000000000 -0700
+++	2005-07-05
10:06:39.000000000 -0700
@@ -5037,6 +5037,10 @@
 		PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0},
 		PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0},
 MODULE_DEVICE_TABLE(pci, megaraid_pci_tbl);

My next task was to get the module source patched, compiled and loaded into the kernel that the install CD uses.

The problem is that if you compile your own module, you have to compile it against the exact version of the source code that was used to create the kernel into which you want to load the module. You also have to compile it with the very same version of gcc that was used to compile that target kernel. I needed to find out the gcc version and the version of kernel source code used to make the kernel that the CD boots from.

I booted a different machine off the install CD. When I got to the install screen, I went to one of the virtual terminals with CTRL+ALT+F2. One command gave me all the information I needed:

# cat /proc/version

It said that we were using the kernel, and that this kernel was compiled with gcc 4.1.0.

Next, I set up a machine running that kernel, installed gcc 4.1.0, and installed the kernel-source RPM for that same kernel. On that machine I went into /usr/src/linux/drivers/scsi and manually edited the megaraid source, applying the changes specified in the patch. To play it safe, I also wanted to use the .config file that was used to build the kernel. Fortunately, this is located in the /boot directory. You can quickly get this file where it needs to be with this command:

# cp /boot/config- /usr/src/linux/.config

I was all set up. I then recompiled the whole kernel and modules using “make && make modules”.

When that finished 93 years later, I copied the patched megaraid files (megaraid.c and megaraid.h) and the new megaraid.ko kernel module onto my USB stick.

I then went back over to the Netserver machine with the NetRAID controller. I booted off the CD again. To my dismay, I saw that the megaraid_mbox module had loaded, preventing me from loading the megaraid module that I had just created.

Again, I hit the research path. After a while, I found a post somewhere that pointed me to a file right on my own hard drive. After reading /usr/share/doc/packages/hwinfo/README, I saw that it was indeed possible to disable probing of the hardware at boot time. I was going to have to turn off probing of my NetRAID 1M with the kernel parameter “hwprobe=-0104:101e:1960”. These numbers were fished out of all that info I gathered from the box before I wiped it.

I found I also would need to use “noapic” or other things would not work properly.

Into the drive went CD 1 and I started up the machine. When grub came up, I typed in “noapic hwprobe=-0104:101e:1960” as my kernel parameters and away I went.

When I got to the SELECT LANGUAGE screen of the installer, I again pressed CTRL+ALT+F2 to get into a virtual terminal. I mounted my USB stick and copied my megaraid.ko module off it into the /modules directory. It went something like this:

# mkdir /usb
# cat /proc/partitions
(looking for the USB partition)
# mount /dev/sda1 /usb
(where /dev/sda1 is my USB partition)
# cp /usb/megaraid.ko /modules/

I took out the USB stick here so that the hard disk would be at /dev/sda. I also did this so that the installer would not try to mount my USB stick as some kind of Windows partition. At this point, I held my breath and ran the command that would solve my woes:

# modprobe megaraid

To my utter amazement, the module actually loaded right in without a problem. During a few previous attempts, I got an error like “-1 : Invalid module format”, which I hoped not to get again. As it turned out, everything went fine.

With ALT+F7, I was back at the installer screen. I went through the installer and rebooted the machine.

Bad news. The kernel that was installed was a bigsmp kernel, and could not use the megaraid module that I had compiled.

I started the installation over. This time, during the installation, I changed things a bit. I deselected the bigsmp kernel and instead selected the default one for installation. As it reached the point where it was going to reboot, I stopped it. There was one last thing that I needed to do before I restarted the machine.

When YAST installs a new machine, it puts the kernel into /boot along with an initial ram disk. This initial ram disk is used to get a minimalistic environment set up so that it can load kernel modules without having to have the filesystem available.

Knowing this, I mounted my root partition and copied the megaraid.ko module from /modules (I was still in the CD environment) over to the modules directory of the kernel installed on the hard disk. It went something like this:

# cat /proc/partitions
(I then found my root partition, made a mount point for it, and mounted it)
# mkdir /hdd
# mount /dev/sda1 /hdd
(where /dev/sda1 is my root partition)

I could then copy my megaraid.ko module to the appropriate place on the hard drive:

# cp /modules/megaraid.ko /hdd/lib/modules/

I copied the /etc/mtab file to the hard disk so I could chroot into it:

# cp /etc/mtab /hdd/etc/mtab
# chroot /hdd

I had to make sure that SUSE would put the megaraid module into the initial ram disk. This is done in the /etc/sysconfig/kernel text file:

# vim /etc/sysconfig/kernel

All I had to do was to make sure the INITRD_MODULES had “megaraid” in it:

INITRD_MODULES="megaraid serverworks sym53c8xx processor thermal fan reiserfs"

After saving and exiting, I then went about rebuilding the initial ram disk (hereafter referred to as ‘initrd’):

# rm /boot/initrd-
# mkinitrd -k vmlinuz- -i initrd-

I then went into /boot/grub/menu.lst to clean it up a bit. I basically had to take out the hwprobe kernel option. Evidently, it put in the “noapic hwprobe=-0104:101e:1960” options I had first entered when I first booted into the install disc.

I took a deep breath, took out the install disc and then rebooted the machine. To my utter thrill and excitement, the machine came right up, loading my patched megaraid module. After having gotten the “Invalid module format” error so many times previous to this, I was quite anxious. But it worked this time, so I was in great shape.

There was only one problem, though. I was now running a uniprocessor kernel on a machine with multiple processors, and the megaraid module I had compiled would be rejected by the bigsmp kernel. I decided to have a go at making the patched megaraid module work with the bigsmp kernel.

The first thing I did was to install kernel-bigsmp- and kernel-source- That got me the right kernel and the right kernel source code. Now, I needed to install gcc and a few other things:

# yast2 -i gcc nfs-utils ncurses-devel

Fortunately, YAST installed the 4.1.0 version of gcc by default without much direction from me.

Now, I needed the .config file that was used to create the bigsmp kernel that I had just installed. This was accomplished with the same method I had used previously:

# cp /boot/config- /usr/src/linux/.config

I now had the correct bigsmp kernel installed, I had the right kernel source package installed, and I had the right version of gcc installed. I had copied over the .config file. One thing left to do: copy the patched module source into the kernel source tree.

I mounted my USB stick and copied the patched versions of the megaraid.c and megaraid.h files over into the kernel source tree, putting them into the /usr/src/linux/drivers/scsi/ directory.

All was ready. I changed into the kernel source root and went for it:

# cd /usr/src/linux
# make && make modules

38,283 hours later, the compile finished. I now had a version of the module that was compiled specifically for the version of the kernel that was now installed. I had to copy it to the right place:

# cp /usr/src/linux/drivers/scsi/megaraid.ko /lib/modules/

Again, I wanted to make sure that the /etc/sysconfig/kernel file was still going to include my megaraid module in the new initrd when I made one:

# vim /etc/sysconfig/kernel

Make sure the INITRD_MODULES has “megaraid” in it:

INITRD_MODULES="megaraid serverworks sym53c8xx processor thermal fan reiserfs"

I saved and quit, and made myself a new initrd:

# rm /boot/initrd-
# mkinitrd -k vmlinuz- -i initrd-

Next, I checked /boot to make sure that vmlinuz was pointing to vmlinuz- and not something else. I also checked to make sure that initrd was pointing to initrd- and not something else.

# ll /boot

I checked really quickly to make sure that /boot/grub/menu.lst had an entry in it that would be using vmlinuz as the kernel and initrd as the ramdisk and that grub was configured to use that entry by default:

# cat /boot/grub/menu.lst

Everything checked out, so I rebooted the machine:

# shutdown -r now

To my joy and amazement, the megaraid module built for the bigsmp kernel worked like a charm. As a last measure, I went into /boot/grub/menu.lst, took out the “noapic” kernel option, and rebooted. It came right back up without a hitch.

I now had a completely usable system where the kernel was the one provided by the kernel-bigsmp- package, and the initrd was changed a bit to give it my patched megaraid module. Heh, I just have to be extremely careful with the updates I do on that box.

If you know anyone that has a NetRAID 1M/2M RAID controller and wants to put SUSE 10.1 on it, this is how you make it happen. As a matter of fact, I even have available the compiled binaries for both these kernel versions and the patched source code so you can build it for other kernels as necessary.

Good times.

August 23, 2006

Reverse Tunneling with SUSE Linux 10.1 and SSH

by @ 6:21 am. Filed under General SUSE, How-To, ssh tips, SUSE Tips & Tricks, Work-Related

So now, we can’t have personal computers on the company network. This “protects against viruses being introduced to the company network,” was the explanation given to me. Never mind that I am running Linux, which isn’t susceptible to them, and certainly doesn’t perpetuate them. So my desktop is on the regular old class C company subnet (192.168.0.x), and my laptop has to be on the wireless network, which is on a completely separate subnet (192.168.1.x). Obviously, there is no way to route traffic between the two computers. So what do you do? Time to whip out the SSH tunnel, again. Only this time, it’s a reverse tunnel.

The idea before was to set up a machine inside the network to forward traffic to a computer outside the network, which then sent it somewhere else. This time, we set up a computer outside the network to forward traffic to a computer inside the network. Then, we just connect to that outside computer, and the traffic is automatically forwarded in to the inside computer.

As in my example, I set up a tunnel between my desktop machine on the 192.168.0.x subnet and my server. I told the server to forward all SSH connections that hit port 10000 to port 22 on the desktop computer. Then, I just SSH in to my server from my laptop, and my request actually ends up at my desktop computer. Because I’m using KDE and fish://, it’s essentially just like browsing a network fileshare on a local subnet, because Linux can do stuff like that.
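The two commands involved look something like this (hostnames are made up for illustration; substitute your own):

```shell
# On the desktop (inside, 192.168.0.x): open the reverse tunnel.
# The server will listen on its own port 10000 and forward those
# connections back to this machine's sshd on port 22.
ssh -R 10000:localhost:22 scott@server.example.com

# On the laptop (wireless, 192.168.1.x): SSH to the server first,
# then through the forwarded port to land on the desktop. (By default
# sshd binds remote forwards to the server's loopback only; see the
# GatewayPorts option if you want to hit port 10000 directly.)
ssh scott@server.example.com
ssh -p 10000 scott@localhost
```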

Sounds exciting, and indeed it is. If you’d like to read the tutorial I used to set this all up, head on over here. Fun stuff, baby.

August 18, 2006

SUSE Linux running Nagios is pretty cool

by @ 7:03 am. Filed under General SUSE, Work-Related

I have had an absolute blast today. I’ve always heard about how wickedly powerful Nagios is. However, I’ve also heard many different horror stories about how people lost an arm to Nagios when they tried to install it. My friend Steve now has a glass eye from his ordeal with it.

Naturally, of course, not wanting to lose an arm or an eye, I’ve steered clear from Nagios for a while. That being said, my manager came to me the other day and asked me, “Is there anything open-source that can monitor our servers to let us know when things go down?” I was like, “So, now you need Linux to babysit your Windows servers lest they crash?” After the chiding, I said, “Why yes, there is a tool called Nagios.” He said, “How soon can you get it installed?” I said, “About 2 weeks.” The painful look he gave me was priceless. After gleaning as much enjoyment as I could from it, I said, “Just kidding… but probably a day or two,” really having no idea because of what I’d heard about the installation.

The thing installed in about 15 minutes. Big whoop. Then came the configuration of that bad boy. Heh, that’s where I’m guessing people hit the wall. For some reason, it seems to me that if you’ve had experience with object-oriented programming and relational databases, the config files kinda sorta just make sense. They did for me, anyway. YMMV.

I also wasn’t able to get the notifications to work, so I wrote my own scripts and plugged them in. They work beautifully. I set up a PING monitor for my desktop machine. I then spent the next 20 minutes turning the machine off and back on to watch the monitor go from CRITICAL to OK and back. Boy, simple minds have simple pleasures. Maybe that’s why my brother grew up eating crayons.


I also couldn’t find a MySQL monitor, so I just used the check_tcp monitor to connect to port 3306 on the target machine. I realize that this does not actually run a query on the database to see if it is actually working properly. However, it will tell me if the server is not running. Maybe I’ll fix that later. For now, it looks good to the untrained eye.
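In the same spirit, the check amounts to nothing more than a TCP connect. A crude stand-in using bash’s `/dev/tcp` redirection shows the idea (the real check_tcp plugin does this plus response timing):

```shell
#!/bin/bash
# Rough equivalent of `check_tcp -H host -p 3306`: succeed if the
# port accepts a connection, fail Nagios-style (exit code 2) if not.
probe_tcp() {
    local host=$1 port=$2
    if (exec 3<>"/dev/tcp/${host}/${port}") 2>/dev/null; then
        echo "OK - ${host}:${port} is accepting connections"
        return 0
    else
        echo "CRITICAL - ${host}:${port} is not reachable"
        return 2
    fi
}

# Port 1 on localhost is almost certainly closed, so this
# demonstrates the CRITICAL branch.
probe_tcp 127.0.0.1 1 || true
```

The standard plugins package also ships a check_mysql plugin that actually logs in to the server, which would be the proper fix later.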

OK, well, I set up like 27 monitors on about 6 different machines. Fun day, tell you what.

Really it was not nearly as painful as I had been led to believe. Perhaps it was the tutorials that I used to get me started. Maybe I’ll write my own (for some reason, I get a charge out of writin’ good, clean, helpful tutorials) for anyone who may find it useful. One of the two tutorials I used was located on a CoolSolutions page. Between the two of those articles, from the time I started until the time I was monitoring the local machine was about 25 minutes. Not too shabby.

Oh, don’t forget to install openssl-devel before you do this, otherwise you won’t be able to check your HTTPS servers (using the check_http plugin).

If you dig screenshots, here’s one of my Nagios install in action:

Nagios on Linux in action
Click for larger image

August 10, 2006

SUSE Linux 10.1 redeemed – problems caused by faulty hardware

by @ 7:01 pm. Filed under SUSE Blog News, SUSE News, Work-Related

The server story continues. Today, my manager told me to install Windows on that box that has been giving me so much trouble. After coming back in from being violently ill outside, I got to work. Immediately, the Windows 2000 disc asked me for RAID drivers. Sweet… no dice there. Windows fails again. I took that information to my manager who said to install Windows 2003 Server. After coming back in from being violently ill again, I installed Windows 2003 Server on that machine. About 4 minutes after it was finished, imagine my glee when I saw this on the monitor:

SUSE Linux 10.1 rides again


As it turns out, the machine is just a dying piece of junk. It would crash on the NIC drivers, then it would crash on the RAID drivers. Windows 2003 Server even crashed on the NTFS drivers.

My manager is giving me a different machine tomorrow so I can put SUSE 10.1 on that bad boy. Woots. Wish me luck.

I got such a kick out of that BSOD that I have added it to my Windows Error Gallery. Go check it out.

August 9, 2006

SUSE 10.1 – Ashes, Ashes, we all fall *DOWN*!

by @ 6:20 pm. Filed under General SUSE, Work-Related

Server Update

I learned that the following patch was in the 2.6.17 kernel, a version of which was released as stable on Monday:

commit 57a62fed871eb2a95f296fe6c5c250ce21b81a79
Author: Markus Lidel 
Date:   Sat Jun 10 09:54:14 2006 -0700

    [PATCH] I2O: Bugfixes to get I2O working again
    From: Markus Lidel 
    - Fixed locking of struct i2o_exec_wait in Executive-OSM
    - Removed LCT Notify in i2o_exec_probe() which caused freeing memory and
      accessing freed memory during first enumeration of I2O devices
    - Added missing locking in i2o_exec_lct_notify()
    - removed put_device() of I2O controller in i2o_iop_remove() which caused
      the controller structure get freed to early
    - Fixed size of mempool in i2o_iop_alloc()
    - Fixed access to freed memory in i2o_msg_get()
    Signed-off-by: Markus Lidel 
    Signed-off-by: Andrew Morton 
    Signed-off-by: Linus Torvalds 

However, the freaking server is still crashing. I have tried using the i2o_block module or the dpt_i2o module on Kubuntu, Gentoo, SUSE 10.0, Knoppix, and SUSE 10.1. I’ve also tried using kernel versions 2.6.15 through 2.6.17 in the SUSE 10.1 install. All of these attempts have resulted in exactly the same behavior: the server hangs at random. Dean contacted me (thanks, bro) with a suggestion to update the firmware and BIOS. As luck would have it, they were already at their latest versions.

Because I have the 2.6.17 kernel installed with the patch that was supposed to fix my problem, I’m beginning to wonder if the problem isn’t related to a hardware failure somewhere. I started a 24-hour memory test today to see if I could find a problem with the RAM. Any other suggestions for things that I could try?

By the way, for hardware diagnosis and tons of other cool tools, I recommend The Ultimate Boot CD. Anyone else have other suggestions?

August 8, 2006

SUSE Linux 10.1 on my server

by @ 6:48 am. Filed under General SUSE, SUSE Blog News, Work-Related

I wanted to thank everyone who has provided great feedback on the ebook that I released last Monday. As many of you know, the influx of HTTP requests brought my server to its knees. This happened because my server has limited bandwidth. It could not fill all the requests fast enough, so everything bogged down. When I limited the number of connections, everything normalized again. It’s all good, though. It’s good to know that there is that much interest. Hopefully, the ebook is helpful to everyone who wants to learn about how to use Linux.

I have been a bit silent since the release of the ebook. This is mainly because I am focusing on a problem I’m having with a server at work. It has an old Adaptec 2100S RAID controller. This is driven by either the i2o_block module or the dpt_i2o module. Evidently, in SUSE 10.0 (which is what I tried first), both modules load, causing a race condition. In 10.1, the i2o_block module is used. The problem is that when I use this module, the server randomly locks up. I did manage to grab this error during one of those lockups:

kernel BUG at include/linux/i2o.h:1074!
invalid opcode: 0000 [#1]
last sysfs file: /firmware/edd/int13_dev81/extensions
Modules linked in: ipv6 af_packet edd reiserfs loop dm_mod usbhid ide_cd cdrom i2c_piix4 i2c_core e1000 mii sworks_agp agpgart shpchp pci_hotplug ohci_hcd usbcore parport ext3 jbd processor i2o_block i2o_core serverworks ide_disk ide_core
CPU:	0
EIP:	0060:[]	Not tainted VLI
EFLAGS: 00210282	( #1)
EIP is at i2o_driver_dispatch+0x25/0x1a1 [i2o_core]
eax: 01ba0000 ebx: fffffffe ecx: dfcfec00 edx: 01b90000
esi: dfcfec00 edi: fffffffe ebp: 0000000b esp: c034bf38
ds: 007b es: 007b ss: 0068
Process swapper (pid: 0, threadinfo=c034a000 task=c02ef2c0)
Stack: <0>dfcfec00 00000068 c01277b2 fffffffe dfcfec00 000000b f884d62c
	c1b78840 00000000 c013ff8a c034bfa4 00000580 c0341380 0000000b c1b78840

Call Trace:
 [] do_timer+0x39/0x316
 [] i2o_pci_interrupt+0x22/0x3e [i2o_core]
 [] handle_IRQ_event+0x23/0x4c
 [] __do_IRQ+0x7e/0xd1
 [] do_IRQ+0x46/0x53
 [] common_interrupt+0x1a/0x20
 [] default_idle+0x0/0x55
 [] default_idle+0x2c/0x55
 [] cpu_idle+0x8e/0xa7
 [] start_kernel+0x2b5/0x2bb
Code: 20 75 de 5b 5e c3 55 57 89 d7 56 53 83 ec 0c 89 04 24 8b 90 a4 00 00 00 39 d7 72 0f 8b 0c 24 89 d0 03 81 a8 00 00 00 39 c7 72 08 <0f> 0b 32 04 45 ed 84 f8 8b 04 24 89 fe 29 d6 03 b0 a0 00 00 00
<0>Kernel panic - not syncing: Fatal exception in interrupt

What is funny is that the Kubuntu CD I booted into uses the dpt_i2o module rather than the i2o_block module. Because of this, I decided to force SUSE to use the dpt_i2o module as well. Hopefully it won’t lock up as it’s compiling that kernel or making and installing the modules. If anyone has any other ideas on how to address this issue, I’m all ears.
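For anyone wanting to try the same swap, the general recipe looks like this. The blacklist path is a SUSE-10.x-era assumption (it varies by distro), and the sketch writes to a scratch file so it is safe to run anywhere; on the real box you would edit the actual modprobe config:

```shell
#!/bin/bash
# 1. See which of the two candidate drivers is currently bound.
grep -oE '^(i2o_block|dpt_i2o)' /proc/modules 2>/dev/null \
    || echo "neither i2o driver loaded"

# 2. Blacklist i2o_block so dpt_i2o can claim the controller at boot.
#    Using a scratch file here; the real target would be
#    /etc/modprobe.d/blacklist (path varies by distro).
blacklist_file=$(mktemp)
echo 'blacklist i2o_block' >> "$blacklist_file"
cat "$blacklist_file"

# 3. On the real machine, rebuild the initrd so the blacklist takes
#    effect at boot, then load the alternate driver:
#      mkinitrd
#      modprobe dpt_i2o
```

The initrd rebuild is the step that bites people: if i2o_block is baked into the initrd, blacklisting it in the config alone won’t help.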

June 26, 2006

Scott Morris: Bill Hilf No Threat To Linux on Desktop

by @ 6:59 am. Filed under General Linux, Linux News, My Opinion, War, Work-Related

This type of thing absolutely never ceases to amaze me. This person can’t possibly be so out of touch with reality that he actually believes what he is saying here. I mean, I’m sure he sat down with a room full of PR experts and actually brainstormed this stuff. That is the only plausible explanation. Obviously they know that anything they say reflects on Microsoft. Any opinion he expresses, then (especially having taken Martin Taylor’s place, who left this past week), you KNOW will have to be in absolute line with Microsoft’s standpoint, no matter what it is.

What am I talking about? I just read this li’l story about Mr. Bill Hilf, the guy who succeeded Martin Taylor this past week at M$. He claims that “Linux No Threat To Windows On Desktop.” Mr. Hilf, you must be new, there, tiger.

Alrighty, looks like it’s time again to bust out the baseball bat and take this guy one argument at a time. Lest anyone fall victim to what he is saying, I will explain why he is out of his freaking mind. Believe me, he is.

On that note, let’s get started:

“Linux isn’t a threat to Windows on the desktop and is losing steam on the server as customers separate the operating system from the development model, according to Microsoft’s chief platform strategist.”

“Bill Hilf, general manager of competitive strategy at Microsoft, said pundits have predicted for years that Linux will gain momentum on the desktop, but that won’t happen because of the complexity involved in delivering a tightly integrated and tested desktop product.”

Let’s see, you mean it is losing steam because Massachusetts has banned Microsoft Office? Because there have been more Linux-related expos this year than any other year? Because Microsoft now uses Linux in their wireless networks? Because the Belgian government has banned Microsoft Office? Because Munich, Germany migrated over 12,000 desktops to Linux? Because Venezuela is also switching to Linux? How about the adoption rate of Linux in China right now? What do you have to say about the heavy usage of Linux in the Armed Forces of the United States?

In my own research, I have found that governments in South America, such as Argentina, Brazil, Colombia, Chile, and Peru are switching. European governments making the switch include Kurdjali, Bulgaria; Munich, Germany; Bergen, Norway; Schools and Government Agencies in Italy; Moscow, Russia; the United Kingdom; Canary Islands; Denmark; Barcelona, Spain; Dundee, Scotland; Central Scotland Police; France; Iceland; Poland; and Portugal. In Asia, the big ones are China, India, Malaysia, Hong Kong, and the Philippines.

I’m not even going to list the countless banks that have started using Linux to protect their data. Well, I might mention the fact that China’s largest bank switched to Linux. Actually, I’ll just briefly mention that the Deutscher Investment Trust, a German bank, has also switched to Linux. There’s also the Venezuelan bank, Banco Mercantil (with which I actually had an account at one time), who has made the switch to Linux. And of course, there’s the Indian bank that switched to Linux. Yeah, fine, they’re mostly servers. At least it’s causing M$ to lose money.

Maybe Mr. Hilf missed all of those stories in his RSS feeds or something.

Man, one of us is not plugged into the real world. Wake up, Neo.

Here’s the other thing: He says in the above quote that “pundits have predicted for years that Linux will gain momentum on the desktop, but that won’t happen because of the complexity involved in delivering a tightly integrated and tested desktop product.” Again, you’re already behind, sport. There is at least one Linux platform that is already doing this stuff. I will address this in detail after I set straight the other fallacies put forth by Mr. Hilf.

I think it’s cute that he slips in, “I’ve been a Linux desktop user for a really long time.” Watching someone install it once doesn’t really count, man.

“‘The magic of open-source software is not the software. It has nothing to do with the code at all. Most open-source code is terribly inferior to commercial software code,’ Hilf said. ‘The magic is the community and how you interact and participate in a community and make development transparent enough that the community believes in you and trusts you.'”

The magic of this moment is in the generous amounts of ludicrous that this statement is. He’s right, though. The magic of open-source software is not the software; the penguin seduced, brainwashed, and hypnotized me into using Linux. Oh, wait, now I’m speaking like a Microsoftie. The magic is actually in the track record that Microsoft’s code has of being inferior in many different ways, the biggest being an absolute lack of anything resembling secure code, addressed only when they were forced to address it. “Whoops, I hadn’t realized that our code was so horrible and crappy. I’ll have to look into that.” The translation of this is: “We don’t care how poorly-written, crappy, and insecure our code is as long as the ignorant users of Windows will keep on buying it from us. They don’t need to know that there are better options out there.”

Terribly inferior code, huh? Is that why Linux has a track record for rock-solid security (a track record Windows has certainly not enjoyed)? Is that why so many governments, institutions, and companies world-wide (including Microsoft, by the way) have started using it for everything having to do with security? Is that why Windows has a track record of being more open than a lady of the evening working her favorite corner?

Perhaps the thing that absolutely blows my mind the most about the quote is the end, where he says, “The magic is the community and how you interact and participate in a community and make development transparent enough that the community believes in you and trusts you.” What gives Microsoft the most remote impression that they know anything about this? According to consumers, they are ranked almost at the very bottom in terms of how much people trust them. No one trusts Microsoft. Could that be because they’ve done everything they can to get peoples’ money rather than provide a quality product? Maybe it’s because they pull crap like this. The community believes and trusts Microsoft, huh? Not on your life.

“Hilf’s comments come as Novell and Red Hat market more advanced, integrated server operating systems and desktop products that compare more favorably with the Microsoft desktop than in prior years.”

You have absolutely no idea how true this is, and I’m gonna tell you how true it is in just a bit… after we explore how out of his mind this guy really is.

“And even though Linux may appear slick on the desktop, it can’t compete under the covers, Hilf said. Novell and Red Hat are trying to adopt Microsoft’s integration model, but the process of integrating system components and ensuring third-party applications and device drivers run well on the desktop–and testing all those scenarios–makes that task too cumbersome.”

Except that no one I know of is actually trying to adopt Microsoft’s integration model. Are you? Is Ubuntu? Debian? Fedora? No hands. Hmmm…

I’m not even going to make a comment on Mr. Hilf’s comments about competing “under the covers,” especially where it involves Martin Taylor.

He’s backwards again in reference to his little bit about the device drivers. Here’s the funny thing: According to Steven J. Vaughan-Nichols, Linux actually supports more hardware than Vista does! Scratch your head, there, for a minute. Yes, apparently Linux now supports more hardware than Vista. Hmm… again, Mr. Hilf is apparently just a little wet behind the ears. Here’s a towel, chief.

“‘Vendors come in and buy piece parts, and they try to assemble a mini Microsoft development model. But who is going to test it? It’s the user,’ Hilf said. ‘The user tests and reports back bugs on the desktop. The end user doesn’t want to be a tester, unless they’re a developer. It’s extremely hard and complex.'”

So hard and complex that Microsoft has produced nothing but buggy and worthless code for the past 20 years, and even worse updates (the XP SP2 disaster comes to mind). So hard and complex that yes, it is the user that has had to deal with the brunt of this neglect. You are absolutely right: the end user does not want to be a tester. So why, for the past two decades, have you forced them to be exactly that?

“‘There was a ton of work and engineering put into the Win32 API. Why do people want to clone the Win32 API, like the WINE project?’ he added.”

Answer: Because Windows absolutely sucks. People are willing to do anything to run their favorite applications on a platform that doesn’t make them feel like they are rolling around naked in raw sewage.

“Hilf gave kudos to his predecessor, Martin Taylor, a Windows Live executive who left Microsoft this week, for developing the Redmond, Wash., company’s Get The Facts marketing campaign.”

Evidently, neither of them realize how fruitless that lame campaign was, and how little was accomplished by it. Maybe it’s not the campaign they are congratulating each other about, after all. Maybe it has to do with their competing under the covers.

“‘One thing Martin did before I started [with Microsoft] was to help centralize the company around a single way of thinking about this,’ Hilf said. ‘There were a lot of different people composing theories about what to do to compete against Linux and open source.'”

Well there you go, sport. If it were such a non-threat, why are you trying so hard to get rid of it? If Linux is absolutely no threat to Windows on the desktop, as you are claiming throughout this article, why in this world of ours would you and His Majesty Martin Taylor be so hell-bent on destroying it? Why would an entire department, and even your very position at Microsoft, be created and designed to attack Linux in every conceivable way? Why has everyone been obsessing about ways to nuke Linux, as you yourself say?

Must be more of a threat than you will admit.

Please spare the world the garbage, PR-campaign-style, FUD-perpetuating speeches. I realize that your employer forces you to say these things. I realize that if you want to keep your job, you have to say them. Just know that anyone with the ability to reason doesn’t buy it. Neither does anyone who has experience with Linux, or anyone who has ever had to clean up a corporate disaster because of a security hole in your bug-ridden software, or anyone who can add up the fact that a FREE download (Linux) is cheaper than your Windows + Office combo tipping the scales at around $500. Hmm… I guess that includes most serious IT professionals.

Now, I’d like to discuss some more about why this guy needs a padded room in which he can enjoy his own little reality.

In short, there is already at least one distribution of Linux specifically designed to integrate seamlessly into a Windows-centric environment. I realize that there are others, but I have personal knowledge of at least one. Let me tell you a little bit about it.

First of all, if you are in an enterprise or SMB, what will you be looking for in a desktop operating system? You will want something that saves you money, maintains or increases employee productivity, is easy to admin, is secure, and it has to be something you can just drop into your existing environment.

How will it save money? Let’s look at a quick comparison.

What will most basic office desktops need? The operating system and some kind of Office suite. Windows + Office = about $500. This is per desktop. If you have 25 desktops, that is already $12,500 right out the window. You can purchase SUSE Linux Enterprise Desktop 10 for (don’t quote me) about $50 or so, give or take. You immediately drop from $12,500 to $1,250. Hmm… about 10% of the cost. Yep, looks like that saves us some money right there. No problems with that argument. It’s very obvious and twice as clear.

How about maintaining or increasing productivity?

There is the fact that the Office suite, OpenOffice, provides the ability to do just about anything that its Microsoft counterpart can do. It is intuitive and fairly straightforward. Even new users can be productive immediately. So there’s one way: OpenOffice provides the office production tools.

Let’s look at another way.

Many people, when they are trying to find some bit of information on their computer, cannot remember where they saved it or originally found it. This leads many users to waste untold amounts of time looking through email, web sites, documents, and other files to see if they can find it by hand.

Enter beagle, the desktop search tool. In SLED 10, you have the ability to pop up your beagle window, run your search, and have immediately available a list of results. Thus we see that time is saved, and productivity is increased.

So beagle is another example. Let’s look at one more.

You have the new desktop effects found in Xgl. Yeah, they look great. However, many of the visual effects actually make the operating system more intuitive and easy to use. They provide visual cues that help new users get comfortable with the system very quickly.

Alrighty, there are a few examples of equal or increased productivity. I could go on, but I have other stuff to cover.

Is it easy to admin?

SLED 10 works with SLES 10 to create situations where you can push updates out to the desktop machines. You can “batch admin” systems, something that has not been available before (correct me if I’m wrong, here) for a desktop. Novell has really thought about this. They are addressing these types of things. I’m not just saying that because I work there. In another week I won’t be.

SLED 10 will be the ultimate admin-friendly desktop. SLED 10 and SLES 10 were created from the same code base. Same YaST. Same basic system for all the machines. This means that it is easier for admins to maintain.

The next point is security, which has long been a dead horse. Linux is more secure. Always has been.

Perhaps the point that Mr. Hilf is saying is that Linux does not play well in a Windows environment.

This is actually one of the very strongest points of SLED 10. How?

First of all, SLED 10 can work with things like Active Directory to authenticate users. You can also authenticate with LDAP. It integrates right into the authentication setup you already have.

SLED 10 can browse the Windows Network Neighborhood, and can appear as a Windows computer to other machines on the network. This means you can browse and share files and printers, just like you would do on your Windows machines.
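Under the hood this is Samba. A minimal smb.conf sketch for showing up on the Windows network might look something like the following (the workgroup, NetBIOS name, and share path are made up; on SLED, YaST writes this file for you):

```
[global]
   workgroup = OFFICE
   netbios name = SLEDBOX
   server string = SLED 10 desktop

[public]
   path = /home/shared
   browseable = yes
   read only = no
```

With that in place, the machine appears in Network Neighborhood under the OFFICE workgroup just like any Windows box.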

Another way that SLED 10 works with your existing environment is via Evolution. The SLED 10 platform can connect to whatever messaging back-end you are using. It plugs into Exchange, for example.

There is also the fact that OpenOffice can read and save any Microsoft Office formatted document. This means that if you do have Office users that you work with, you can send them documents in Word and Excel formats.

I’m telling you, SLED 10 will be the first Linux platform ever that can truly address all of the issues to be considered when looking for a desktop platform. Gone are the days when the knee-jerk response is “OK, so how much do I need to fork over to you, Mr. Gates?” Only the old-school has-beens are still thinking this way. Join the new IT movement. The one where everyone sees Microsoft’s software for the trash that it is, and sees Linux for the value that it adds to the industry. At the very least, join us in the real world and think for yourself.

I’m not the only one saying this type of thing. Check out Neil McAllister’s article, SuSE Linux Enterprise Desktop 10. His review of SLED 10 demonstrates what I’m saying here.

June 22, 2006

The Distro Dance

by @ 7:26 pm. Filed under General Linux, Work-Related

As of Monday, June 3, 2006, I will no longer be working at Novell doing the SUSE Linux CoolSolutions stuff. I have accepted a position doing PHP and system administration on SUSE Linux Enterprise Server machines. I will also be receiving approximately a bunch more than I am now. You see, I can be bought. I have no shame. I am in it for the money. It’s all about capitalism; the American dream, baby.

As such, I am no longer absolutely bound under penalty of death by schoolbus to using ONLY SUSE Linux. A while back I told someone (seems like it was Hans Fugal) that if the time ever came that I wasn’t bound to SUSE Linux by way of my vocation, I would try out Debian. Well, as I said I would, I am taking a look at it. After 3 days of jigdo downloading stuff, I finally have two DVD images. Dude, that is a ton of packages. Is there any other Linux distro that has more packages than Debian does? Wow.

Anyone else have any reason that I should try any other distributions while I’m at it? Don’t just randomly name Linux distros. I have a link to DistroWatch, too. I can just as easily go to that page and click the links on each distro. If you are going to make a suggestion, please tell me why I should look at that distro. The more details you give, the more likely I am to not delete the comment.

Yes, I’m in a bit of a mood.

May 16, 2006

Xgl on SUSE Linux 10.1

by @ 10:41 am. Filed under Work-Related, Xgl

Alrighty, here’s the deal: I was contacted by the folks at Novell. They said that the Xgl for SUSE 10.1 article was breaking peoples’ systems. So, they asked me to remove it. Ever since then, I’ve gotten all kinds of feedback asking where it went, when it will be back, and stuff like that. Due to popular demand, I put it back up, asking people to tell me which systems it breaks. I want to notify everyone who uses it. That way, they know whether they should use it or not. So please, if it works, let me know what systems it works on. If it doesn’t, let me know which systems it does not work on. That way, everyone can have a clear idea of what is going on and no one will be confused. At least about that. 🙂

Please also include the output of the ‘uname -a’ command. I’ll need to know what platform and version you are actually running.
