[Home]LinuxBox/TechSupport

ec2-54-81-210-99.compute-1.amazonaws.com | ToothyWiki | LinuxBox | RecentChanges | Login | Webcomic

Questions tackled:





Inability to ssh anywhere


I have a... slightly convoluted setup, which probably doesn't help: Ubuntu 11.10 running in VirtualBox on a Windows 7 host behind a corporate firewall. IT believe they have punched a hole for port 22 outbound from the host's IP address. I'm attempting to confirm that ssh is working by connecting to github and toothycat.net. Both attempts fail with "ssh_exchange_identification: Connection closed by remote host". Full debug log is below - it looks somewhat like it's trying to read the private key as a public one!

Is this likely to be caused by the firewall still not being correctly opened? I see we are reaching port 22. Do I need inbound open as well when ssh'ing out (seems unlikely)?

Otherwise, any ideas what might be going wrong? I'm going to crosspost to Ubuntu's StackExchange site.

~$ ssh -Tvvv git@github.com
OpenSSH_5.8p1 Debian-7ubuntu1, OpenSSL 1.0.0e 6 Sep 2011
debug1: Reading configuration data /etc/ssh/ssh_config
debug1: Applying options for *
debug2: ssh_connect: needpriv 0
debug1: Connecting to github.com [207.97.227.239] port 22.
debug1: Connection established.
debug3: Incorrect RSA1 identifier
debug3: Could not load "/home/chris/.ssh/id_rsa" as a RSA1 public key
debug2: key_type_from_name: unknown key type '-----BEGIN'
debug3: key_read: missing keytype
debug2: key_type_from_name: unknown key type 'Proc-Type:'
debug3: key_read: missing keytype
debug2: key_type_from_name: unknown key type 'DEK-Info:'
debug3: key_read: missing keytype
debug3: key_read: missing whitespace
debug3: key_read: missing whitespace
debug3: key_read: missing whitespace
debug3: key_read: missing whitespace
debug3: key_read: missing whitespace
debug3: key_read: missing whitespace
debug3: key_read: missing whitespace
debug3: key_read: missing whitespace
debug3: key_read: missing whitespace
debug3: key_read: missing whitespace
debug3: key_read: missing whitespace
debug3: key_read: missing whitespace
debug3: key_read: missing whitespace
debug3: key_read: missing whitespace
debug3: key_read: missing whitespace
debug3: key_read: missing whitespace
debug3: key_read: missing whitespace
debug3: key_read: missing whitespace
debug3: key_read: missing whitespace
debug3: key_read: missing whitespace
debug3: key_read: missing whitespace
debug3: key_read: missing whitespace
debug3: key_read: missing whitespace
debug3: key_read: missing whitespace
debug3: key_read: missing whitespace
debug2: key_type_from_name: unknown key type '-----END'
debug3: key_read: missing keytype
debug1: identity file /home/chris/.ssh/id_rsa type 1
debug1: Checking blacklist file /usr/share/ssh/blacklist.RSA-2048
debug1: Checking blacklist file /etc/ssh/blacklist.RSA-2048
debug1: identity file /home/chris/.ssh/id_rsa-cert type -1
debug1: identity file /home/chris/.ssh/id_dsa type -1
debug1: identity file /home/chris/.ssh/id_dsa-cert type -1
debug1: identity file /home/chris/.ssh/id_ecdsa type -1
debug1: identity file /home/chris/.ssh/id_ecdsa-cert type -1
ssh_exchange_identification: Connection closed by remote host

I suspect your local ssh client is not seeing the server's response. A successful session produces much the same debug messages, but without the connection being closed at the end. A correctly configured stateful firewall allows outbound packets to some set of ports on remote hosts, and also allows the corresponding inbound packets back: replies from the remote host's port 22 to the originating host and port (which will *not* be 22, and will generally be >1024). I'm guessing that second part is failing for you. You can confirm whether this is what is going on by using telnet to connect to port 22; if packets are allowed in both directions, you should see the ssh banner:
  sl236@debian:~$ telnet www.toothycat.net 22
  Trying 89.16.173.239...
  Connected to www.toothycat.net.
  Escape character is '^]'.
  SSH-2.0-OpenSSH_5.1p1 Debian-5
  ^]

  telnet> quit
  Connection closed.
  sl236@debian:~$
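If telnet isn't to hand, the same banner check can be scripted. A sketch, not the only way to do it: it assumes bash (for the /dev/tcp pseudo-device) and coreutils' timeout, so a firewalled port shows up as a five-second timeout rather than a banner.

```shell
# grab_banner HOST PORT - print the first line the remote service sends.
# Assumes bash (/dev/tcp) and coreutils `timeout`; if the firewall eats the
# replies, this times out after 5 seconds instead of printing a banner.
grab_banner() {
    timeout 5 bash -c "exec 3<>/dev/tcp/$1/$2 && head -n1 <&3"
}
# e.g.  grab_banner github.com 22   - should print an SSH-2.0-... banner line
```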

I did wonder if it might still be a problem with the firewall; IT don't seem terribly... competent at that sort of thing (they're probably quite good at administering a purely Windows network and Team Foundation Server). I'll relay the info to them on Monday and see if we can get somewhere - thanks. --CH


Is anyone else getting persistent and unsubtle attacks on their ssh server? I get people trying to guess the root password or log in to random other "default" accounts every now and again. My system is kept up to date, so I'm not really worried about them actually getting in, but it can be annoying, as it uses all my broadband and noticeable CPU until I iptable them out of existence. --Admiral
Meh. Attack bots. I see maybe 1-2 sets of attempts an hour typically. - MoonShadow
Who seriously has an account on their box with the name "harrypotter" with a default password? --Admiral
That's amazing. I've got the same combination on my luggage! --RobHu
You may be interested in "fail2ban". Installed from the Debian package it does pretty-much exactly what I wanted - iptable out people who have too many authentication failures. --Admiral
However it doesn't seem to react at all to a good old-fashioned synflood, because none of the syn packets result in an actual authentication failure being logged for it to pick up. Luckily, Linux is pretty good nowadays at dealing with synfloods. I iptabled them off the world anyway, because they were being annoying. --Admiral
I just map ssh on my server to a non-standard port on my gateway. The automated attacks seem only to be attempting connections to well-known ports. --B
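For reference, moving sshd itself to a non-standard port is a one-line change in /etc/ssh/sshd_config (2222 here is an arbitrary example port; restart sshd afterwards, and keep an existing session open in case you lock yourself out):

```
# /etc/ssh/sshd_config - any unused port will do; 2222 is just an example
Port 2222
```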


Compiling Network Drivers


We have a network boot server that, whenever a new PC is booted, loads an 8MB Linux image which then installs a base image onto the PC. The new batch of PCs that we have has a new network card which doesn't have drivers in the current image. I can manage to add drivers into the image and that works fine, but they need to be compiled into a .ko file on a Linux box before you can put them in. The network boot image runs kernel 2.6.5-7.244. I've installed a copy of SuSE 9 (kernel 2.6.5-7.97) onto a spare machine to compile the drivers for the new network card on. The card is a Broadcom NetXtreme Desktop Gigabit NIC. I downloaded the Linux driver source code from the Broadcom website and extracted the tar file. I then changed to that directory and typed make, as per their instructions. It generates ~12000 errors, none of which seem to be any help in working out the problem. I'm thinking that I'm just missing some packages required to compile it, but I've no idea which ones.
--qqzm

Linux kernel source contains portions that are autogenerated based on configuration settings you supply. Kernel modules typically need a correctly configured kernel source tree in their include path in order to build at all; and the resulting binaries will usually only work correctly with the exact kernel version configured exactly the same way as what they were built against. You should find the broadcom driver makefiles have a way for you to tell them where the configured kernel source lives; usually a parameter you pass to make on the command line. It will probably default to /usr/src/linux, which in most distributions is a symlink to a directory tree containing the configured headers for the kernel binary package you last installed. If you have the configuration file for the kernel your network server uses, as well as the distribution-specific patches that were applied to its source when it was built (if any), you can retrieve the source for that kernel version, apply the patches, configure the source tree (make menuconfig, then use the appropriate menu option to import the config file that was used to build the old kernel) and build the drivers against it. Otherwise, you will need to configure and build a new kernel, which will be a world of trial-and-error pain if you don't off the top of your head know what device support you need to enable. - MoonShadow
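The steps above might look something like the following outline. All paths are illustrative, and it assumes the vendor Makefile honours the common KDIR-style convention for pointing at a kernel tree - check the driver's README for the actual variable name:

```shell
# Illustrative outline only - adjust paths and versions for your system.
cd /usr/src/linux-2.6.5-7.244
cp /path/to/old-kernel.config .config   # hypothetical location of the server kernel's config
make oldconfig                          # answer any new questions; regenerates config headers
make modules_prepare                    # enough scaffolding to build out-of-tree modules
cd /path/to/broadcom-driver-source      # hypothetical driver source directory
make KDIR=/usr/src/linux-2.6.5-7.244    # many vendor Makefiles take KDIR or similar
```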

Ah yes. I missed out a step in describing what I did. I have copied the source code tree from the kernel we have on the server into /usr/src and set up a soft link called linux to it. What I haven't done is copy the configuration file for the kernel into that folder. I'll try that. Thanks. --qqzm

I've copied linuxrc.config (I assume this is the config file I need?) from the root of the initrd file on the PXE server into the root of the kernel source code tree in /usr/src/linux-2.6.5-7.244/ (I assume this is the correct place to put it), and I still get the same several thousand errors when I try to compile the drivers. --qqzm
Uh, just copying the file isn't enough - you have to actually get the kernel's makefiles to process that file and generate all the autogenerated stuff. In /usr/src/linux-2.6.5-7.244/ do a make menuconfig and select the option to import the config file from the menu, then tell it you're done and it should go away and generate stuff. - MoonShadow
Ok, I've done that, and it appeared to work fine. It's made .config, .config.cmd and .config.old in the /usr/src/linux-2.6.5-7.244/ folder. However, it still does the same thing when I try to compile the drivers. --qqzm
Without access to your build environment I can't think of anything else to suggest, I'm afraid. Anyone else? - MoonShadow
Compile the configured kernel before trying to compile the modules? --Admiral
Still does the same thing. Thanks for your help guys. I'm going to get hold of one of the machines and install linux on it to see if I can get the card working at all (removing the whole network boot part from the problem). If not I'll get onto Broadcom tech support to see if they can shed any light. --qqzm
Might it be possible that the drivers are included in a later version of the kernel? In that case, it might be easiest to just download a whole up-to-date kernel and compile the lot in one place. It'd probably be well-integrated and should compile without problems. I doubt you will have many real problems using a more recent kernel in your boot image. --Admiral


NFS fileshare can be read but not written to



As a continuation of /SharingMyFiles I finally got enough of a RoundTuit to tackle it again and found the firewall stuff to not be as bad as I had expected.  There was a GUI and adding the port numbers for the portmapper, nfs and nmount made things start to work.

I seem to be having a permissions issue though in that while I can remotely copy files off the drive I can't copy files to the drive.  When I try to copy files to the drive it creates a zero-byte file and then comes up with the message "Cannot create regular file '...': Operation not permitted".

The drive I am mapping is FAT so I realise that permissions are different from normal but I can't understand why it would allow reading but not writing.

/etc/exports (on the server) reads:
 /mnt/winshare/media 192.168.0.0/255.255.255.0(rw,sync)

/etc/fstab (on the client) has the extra line:
 192.168.0.102:/mnt/winshare/media   /mnt/winslow  nfs  rw,hard,intr  0 0

Any thoughts? --K

A quick comment before I go to bed which might make this all trivial...  I've just noticed the permissions on /mnt/winslow are 755.  Does this matter? --K
Now fixed (permissions set to 777) but I still don't have the ability to write to the share.  Any ideas? --K

A catch with NFS is mapping UIDs and GIDs. By default, the NFS server will try to access the files under the same UID as the process attempting the access on the client. Root can be treated specially, and there are also options for mapping all requests to one UID. [cut]
I was hoping to avoid having to deal with this by virtue of the fact that the permissions on the drive are 777 so the incoming UID shouldn't matter.  I'll try setting this and see what happens. --K
Also, have you checked the file and directory permissions on the server? (For fat/vfat you can set the ownership and permissions for all the files on the filesystem in several mount options.) Finally, has your NFS server reloaded its configuration file? What's in '/proc/fs/nfs/exports'? --B
The relevant line in fstab is "/dev/sda4  /mnt/winshare  vfat  defaults,umask=000  0 0".  I haven't had any trouble using this drive on the server machine so I don't think the problem is there.  Both server and client are rebooted regularly so configuration files should be up to date.  Once the client has accessed the share, /proc/fs/nfs/exports contains the line "/mnt/winshare/media 192.168.0.0/255.255.255.0(rw,root_squash,sync,wdelay)". --K
Yeah, I think I know what might be causing this. FAT doesn't store owners for files, so all files are owned by root. You changed the mode of the directory to 777, which allows anybody to create a file and supposedly write to it. However, as soon as the file is created, it is owned by root, probably with permissions which don't allow anyone else to write to it. The setting root_squash means that your NFS client will be coming in as any user other than root. Maybe you need to set the mount options for /mnt/winshare to have everything owned by a mortal instead of root, and then perform all writes as that user. --Admiral
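Following that suggestion, the server's fstab line for the FAT drive could hand ownership of everything on it to an ordinary user. The uid/gid values here are illustrative - use the numeric IDs of the account that will be doing the writing:

```
/dev/sda4  /mnt/winshare  vfat  defaults,umask=000,uid=1000,gid=1000  0 0
```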



Drive read failures during boot-up



A large number of scary drive read errors occur during boot up, and appear in /var/log/dmesg as:

hda: command error: status=0x51 { DriveReady SeekComplete Error }
hda: command error: error=0x54 { AbortedCommand LastFailedSense=0x05 }
ide: failed opcode was: unknown
end_request: I/O error, dev hda, sector N
Buffer I/O error on device hda, logical block n


for many different values of n and N (N = n*4)

Answer: MoonShadow pointed out [this bug report] which led me to check the DVD drive.  Turns out that having a music CD in the drive while booting causes these errors to occur (my DVD drive is hda since my hard drive is SATA).

Kylie Minogue Ate My Hamst^H^H^H^H^H Computer!



SynCE on Debian Sarge


(PeterTaylor) I don't suppose anyone's got experience of SynCE on Sarge, have they? I've installed synce-kde and used modconf to load the ipaq module, but I'm not sure whether I need to patch ipaq.ko. The [SynCE doc] seems to date from when Sarge was unstable, so I'm not sure whether it's still relevant. I tried making the patch, but it complains about a missing Makefile, so for the time being I'm trying without on the basis that SynCE wouldn't be in stable if stable didn't also have the kernel support. I'm getting the following in /proc/bus/usb/devices:

 T:  Bus=01 Lev=01 Prnt=01 Port=00 Cnt=01 Dev#= 29 Spd=12  MxCh= 0
 D:  Ver= 1.01 Cls=ff(vend.) Sub=ff Prot=ff MxPS=16 #Cfgs=  1
 P:  Vendor=413c ProdID=4003 Rev= 0.00
 C:* #Ifs= 1 Cfg#= 1 Atr=c0 MxPwr=  2mA
 I:  If#= 0 Alt= 0 #EPs= 2 Cls=ff(vend.) Sub=ff Prot=ff Driver=(none)
 E:  Ad=81(I) Atr=02(Bulk) MxPS=  64 Ivl=0ms
 E:  Ad=02(O) Atr=02(Bulk) MxPS=  64 Ivl=0ms

The "Driver=(none)" is not very encouraging.

(PeterTaylor) Okay, I've moved on from that by reloading ipaq with options vendor=0x413c product=0x4003
(PeterTaylor) Note: you also have to install the package synce-serial to get the synce-serial-config script. And all is now working.



Anyone know why
 sed 's/\n//g'
doesn't work? --Rachael
sed doesn't see newlines, normally. If I need to process newlines, I do tr '\n' '`' | sed stuff | tr '`' '\n', where backtick is any character you know isn't in your source text. In this case you could just do tr -d '\n', though. --AC
Thanks, that works :)
Couldn't use tr -d because I was actually trying to replace a more complicated pattern which contained a newline within it, but was posting the simplified case on here.
Am now having further problems with maximal vs. minimal pattern matching, and have decided to use Perl instead. --Rachael


Mail servers 2


(PeterTaylor) Does anyone have exim using ntlworld for outgoing? I've just reconfigured to do that, so that vacation can actually send, and I get
 SMTP error from remote mailer after RCPT TO:<unused@pjt33.f2g.net>:
    host smtp.ntlworld.com [212.250.162.8]: 550 you are not allowed to send
mail to <unused@pjt33.f2g.net>
I could start running tcpdump to see precisely what's going on, but if someone has a working exim.conf I can look at, that might be quite a bit quicker.
(MoonShadow) While we don't use NTL, here are some things I've been bitten by before:



Mousewheel


PeterTaylor recently did an apt-get upgrade which broke his mousewheel support: he notes for future reference that in /etc/X11/XF86Config-4 the mouse should be configured thusly:

 Section "InputDevice"
        Identifier      "Configured Mouse"
        Driver          "mouse"
        Option          "CorePointer"
        Option          "Device"                "/dev/psaux"
        Option          "Protocol"              "IMPS/2"
        Option          "ZAxisMapping"          "4 5"
 EndSection

where the protocol is the bit which debconf doesn't handle.

This is presumably for Linux 2.2 or 2.4 with a PS/2 mouse: in case it helps someone, for USB mice on Linux 2.4, or any mice (including real PS/2 ones) on Linux 2.6, the Device should be "/dev/input/mice" and the Protocol should be "IMPS/2". --SMcV



Symbolic Links 2


Another question about symbolic links:
If I have a symlink to a directory, how do I delete the symlink? rm doesn't work, and I didn't want to risk rm -f. (The workaround I used was to move the target, then rm the link, but for future reference...) Man pages don't appear very helpful on this subject.

rm should work on a symlink. It's just a file as far as rm is concerned. Check the permissions on the symlink, maybe?

Sorry. The point was that the directory was non-empty.

Shouldn't matter. Are you sure you're trying to delete a symlink and not a real directory? - MoonShadow
Ah. Found out what the problem is. rm CurrentJDK/ complains that it's a directory. rm CurrentJDK works fine. NoteToSelf recorded.



Killing Pigs


Playing with command prompt stuff under Debian, ran 'pig' which, helpfully, translates anything you type in into pig latin...  I then accidentally press ctrl-z instead of ctrl-c and I now have a stopped job called pig...  How do I kill it?  Apparently I can't log out until I do... - Kazuhiko
Two obvious ways: either use ps to get its process ID and then kill -SIGKILL <pid>, or type fg to bring it to the foreground and then ctrl-c it.

 sl236@debian:~$ xpenguins &
sl236@debian:~$ ps
  PID TTY          TIME CMD
10128 pts/5    00:00:00 bash
10133 pts/5    00:00:00 xpenguins
10134 pts/5    00:00:00 ps
sl236@debian:~$ kill 10133
[1]+  Done                    xpenguins
sl236@debian:~$

By default, ps just lists the stuff you started in that session. ps ax will list everything.

ps -ux is probably the most normally used option - it lists all processes you own.  So whether you created them in this session or another one, they are there.

By default, kill asks the process very politely to please go away. kill -s 9 will forcibly garotte it.

DeathSpellNumberNine.  Why the -s though?  And you can also aim kill (and other things) with % - kill %1 should kill stopped (or background) job number one.  --Vitenka
Note that this is a sh and bash feature, and may not work with other shells --Admiral

Can't believe I missed this.  'fg' command will bring a process back into the foreground.  With no arguments, it does what you want.  --Vitenka
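The PID route above, scripted (sleep stands in for the stuck program; in an interactive shell, kill %1 does the same thing by job number):

```shell
# `sleep 100` stands in for the stuck job.
sleep 100 &
pid=$!                         # PID of the background job, as ps would report it
kill "$pid"                    # polite SIGTERM first; `kill -s 9 "$pid"` if ignored
wait "$pid" 2>/dev/null || true  # reap it; wait returns the job's kill status
echo "job gone"                # prints: job gone
```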



Symbolic Links 1


*sigh*  Would anyone care to explain the wonder that is symbolic links?  I need one to link to a socket, which is pretending to be a file...  I think...

I have something pink called mysql.sock in /var/lib/mysql/ and I have an empty space in /tmp/ where, according to my web app, there should be a (presumably pink, but not necessarily) something called mysql.sock.  I have been told that 'symbolic links', conjured up by the command ln, are the way to go, but I thought I would ask for confirmation here rather than have to re-install mysql from scratch again now I've actually built a database in it.
Hit man.  I can't remember which way round the target and destination are, and that's always kinda important.  ln -s creates a new name which is a pointer to the real object.  Hardlinks are sorta the same but more dangerous and work at a lower level (and don't work on things that aren't really files, I don't think).  However, if the webserver is expecting to see something in /tmp then it probably expects to have put it there itself.  While creating it by hand would be a temporary workaround, it'd probably break next time you rebooted.  And my knowledge of mysql runs out a few seconds after I got frustrated with it and went back to flat text files.
Remember which way round the arguments go with "it's the same as cp", i.e. source then destination --Mjb67

Thanks.  - Kazuhiko

ln -s /path/thing /path2/link creates a "symbolic link" file called link that is actually a pointer to thing. It is treated like thing unless you specifically tell *nix not to (for instance, tar and cp by default archive/copy the pointer rather than the content). You can do with it pretty much whatever you can do with thing, you can have as many of them as you like, yadda yadda. If thing moves or gets deleted, the symbolic link no longer points to anything and is invalid. You generally get no warning of this state of affairs until something tries to read thing via the symbolic link and blows up. You can do symbolic links across file systems. thing can be anything - a file, a device, a directory...
Win9x .lnk files are sort of like symbolic links except they're not really supported at the OS level - the application, rather than the OS, does the extra work necessary to open the content pointed to. Current versions of [Cygwin] support them as [proper symbolic links].

ln /path/thing /path2/link creates a "hard link" - a directory entry called link that points to thing. It is completely indistinguishable from the original directory entry for thing. It is also completely independent from it - you can move or delete the original and the hard link will still be valid. *nix maintains a reference count, and so will only delete the actual content of thing when all the directory entries pointing to it are gone. You can only do this on some filesystems (ones that have support for the reference counting), you can't do it across filesystems, and IIRC you can only do it to files - certainly not directories and probably not devices. Although, thinking about it, I don't see why not, since Linux devices are just a special kind of directory entry anyway. What other sorts of things that aren't really files are there? - MoonShadow
Umm.  I don't know offhand if a hardlink to a named pipe does the right thing, or a hardlink to a socket.  You used to be able to make hardlinks to directories if you were root - they're contraindicated now because they lead to an untree-like nature of the filesystem, which would confuse all sorts of stuff, although I don't think you can create loops easily.  mount -obind is recommended instead nowadays I believe. -- Senji
Hard links to pretty-much anything should work, as pretty-much anything on one of these filesystems is represented by an inode - directory entries just point to the inodes without conveying any additional information, so having a hard link means that multiple directory entries mention that particular inode number. In fact, directories have multiple links in normal operation, because each directory has an entry called ".", which points at itself, and all the subdirectories have an entry called "..", which points back, in addition to the directory entry in the directory's parent pointing at it. Hard links to directories are not recommended, because they muck up the ".." entry and confuse programs. --Admiral
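The difference is easy to demonstrate in a scratch directory (a quick sketch; mktemp -d just picks a throwaway location):

```shell
cd "$(mktemp -d)"              # scratch directory
echo hello > thing
ln -s thing soft               # symbolic link: a named pointer to the *path* "thing"
ln thing hard                  # hard link: a second directory entry for the same inode
rm thing
cat hard                       # prints: hello  (content survives via the hard link)
cat soft 2>/dev/null || echo "dangling symlink"   # the symlink now points at nothing
```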

Arigato minna-san...  I still have an error message but now it's a different error message :)  Semi-Victory!!!  I didn't lose!  Semi-Victory!! - Kazuhiko

Ganbatte ne... - MoonShadow



Apache



Most common problem: Apache can't figure out the hostname for the IP address it is serving on. Try putting a line in /etc/hosts which has the server's IP address
followed by a space followed by a name for the server; e.g.

Was:

 127.0.0.1 localhost

Becomes:

 127.0.0.1 localhost
 192.168.2.1 myserver

where myserver is the name in /etc/hostname


NTL cablemodems remember what they first talk to, and won't talk to anything else. Turn off everything, wait 4 hours to be sure, turn the modem back on, turn the Linux box back on and don't turn the Windows box on until Linux has made contact with the modem. Set up the Linux box to act as a gateway for the Windows box. Ideally, put the Windows box on a separate network (two ethernet cards in the Linux box) so it doesn't try to steal the modem every chance it gets.



Auto-Installers


Rant and help about semi-working auto-installers folded to Revision 64 and prior.



Apache Configuration


Colour me dopey, but I can't understand the helpfiles for the configuration files.
I want requests to http://vitenka.com/ftpish to be proxied to http://192.168.1.51/ftpish (not visible to the external network) - hiding my backend configuration.  RequestProxy seems to be the way to go about it, but I can't see an obvious way to restrict that activity to only a few directories.  Any suggestions?  --Vitenka
I think I'm the dopey one here. Surely what you're after is a port forward from your internal network to the external IP address? - MoonShadow
Absolutely not.  I want *most* http requests to be served by the apache running on 192.168.1.50 (port forwarded to *.vitenka.com) - but those to specific directories to be forwarded on from there to the other internal machine.  --Vitenka (Well... I'd settle for getting smbmount to reliably remount whenever it is accessed - that would work too.)

Apache rewrite rules? - MoonShadow
ProxyPass? -- Senji
Rewrite is one of those things I've avoided, since it looks powerful enough to create mandelbrots with.  Anything that cannot be done more simply using another tool is probably something that should be done with a scripting language IMHO.  Besides, won't that only work if the other server is visible to the client (which it isn't)?  I'd also like to have a custom 'that page not available, part of the cluster is offline' error message, but that's hardly vital.  As to ProxyPass - uh, yes, that's probably the module I want to use - but I have no clue how to write the syntax, and the apache manual is less than ideally helpful.  --Vitenka
Ahh.. [zope's manual] is a bit more helpful.  Assuming that syntax still works with apache2.
Humm.  That will work for most of it, but it relies on a zope method (whatever zope is - some kind of scripting server, I think) to fix one problem.  The problem is that the proxied pages will have incorrect paths in them - relative URLs will be fine, of course - but quite a lot will begin with / - which will need to be rewritten.  Is this where mod_rewrite comes in, and if so - haylp!  --Vitenka
Ok - ProxyPass seems to work.  Problem is, to get it to work, I had to turn my machine into an open web proxy.  Now, I'm sure that isn't right - but I don't seem to have managed to lock it down to just those directories.  On the plus side, it makes a nice caching webproxy for me.  (The obvious next comment, use Deny from all; Allow from... won't work - I need to allow from all to these specified directories, and only allow from localnet for everything else.)  Let's see... I don't want much, do I?  Next up is something to do traffic shaping, because bittorrent runs away madly - and the current mainline version does not support --max-upload for the curses version.  I can probably fix it by using headless in some cunning way - but traffic shaping is a better long term solution.  Any recommendations anyone?  I just want the usual - guaranteed low bandwidth up and down for apache, maybe explicitly choke bittorrent and let everything else fight for the remainder, then let apache grab the rest if it's free.  There seem to be lots of apps out there to do it.  Which are good?  Or should I stick to ipchains, in which case what have they done to the configuration language?  (Last time I messed around it was called pfwadmin)  --Vitenka
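For what it's worth, a minimal sketch of the lock-down, assuming Apache 2 with mod_proxy loaded: the key point is that ProxyRequests Off disables forward (open) proxying entirely, while ProxyPass reverse-proxying inside a Location block carries on working.

```apache
# Disable forward proxying, so the server is not an open proxy
ProxyRequests Off
<Location /ftpish>
    ProxyPass        http://192.168.1.51/ftpish
    # Rewrite Location headers in redirects coming back from the backend
    ProxyPassReverse http://192.168.1.51/ftpish
</Location>
```

Keeping the same /ftpish path on both ends means most absolute links inside the proxied pages resolve correctly without any rewriting.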
iptables would be the way to go, as it can do all sorts of QoS stuff (which ipchains can too, but not quite as prettily). A good starting point is the "wonder shaper" script, which I use, and can be found through google. --Admiral




Mail Servers


Ok, I can probably pick up enough docs to install and configure a mail server, sorta.

But how the heck do I test it, and make sure I've not left it as an OpenRelay?

Test it by sending mail to yourself from a Yahoo or Hotmail account created for the purpose. Test if it's an open relay either by telnetting to its SMTP port and typing appropriate SMTP commands by hand, or - more easily - by using [this] tool. Good luck - let us know how you get on! - MoonShadow
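The by-hand version of the relay test is a short SMTP conversation (a sketch session; the example.com-style hostnames and addresses are placeholders). Connecting from outside and naming a destination domain the server isn't responsible for should get the RCPT refused with a 5xx "relaying denied" style response; a 250 here would mean you're an open relay.

```
telnet myserver.example.com 25
HELO test.example.org
MAIL FROM:<nobody@example.org>
RCPT TO:<someone@example.net>
QUIT
```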

Argh! My ISP is threatening to cut me off for sending spam.  I don't even have an open port with a mailserver on it!  _but_ my HDD was going like the clappers this morning.  I may have been attacked somehow.  How do I test this?  (Of course, it might just be that my ISP is responding to faked headers)  --Vitenka

Ack - it looks like I have been attacked:
216.209.123.65 - - [20/Sep/2003:10:38:18 +0100] "CONNECT 216.239.33.25:25 HTTP/1.0" 403 24
209.116.162.38 - - [20/Sep/2003:10:43:19 +0100] "CONNECT mx1.hotmail.com:25 HTTP/1.0" 403 24

And so forth - but 403 means that permission has been denied, doesn't it?

Ah, here's the problem:
38.119.65.29 - - [15/Sep/2003:21:14:56 -0400] "POST http://81.1.51.123:25/ HTTP/1.1" 200 116
38.119.65.29 - - [15/Sep/2003:21:14:56 -0400] "POST http://81.1.51.123:25/ HTTP/1.1" 200 116 "-" "-"

That succeeded, I think.  Though it shouldn't have done.

Ok, I think it's fixed.  Damn evil people not allowing the use of public proxies.  Why can't we attack them, instead of having to cripple our services?  --Vitenka

Use ipchains or iptables for IP-based access control rather than the apache config file - it's much easier, more flexible and takes less resources.. - MoonShadow
By no means useful enough.  The world needs to be able to see my webserver, otherwise it's no good.  And the world needs to be able to reverse proxy through my webserver to my other webserver (which is not world visible).
But does your webserver need the capability to talk to port 25 of every machine in the world?  - MoonShadow
  That now all seems to work - but it's a shame that I can't have a full public http proxy.  I could set up a proxy just for me, on a different port - but even then it'd have to be password controlled rather than limited to certain IPs, since it'd mainly be for use when works network goes wonky - and then I can't guarantee IP address stability.  --Vitenka
Fair enough. Why a public proxy? - MoonShadow
Nope.  Fair point - I hadn't thought about firewalls to keep my LinuxBox in - I was kinda thinking that because it was just running servers I'd only have to worry about keeping intruders out.  Time to go learn whichever iteration of networking rules this kernel has, I guess.  A public proxy was thought of as being a nice friendly thing to run; since I make use of them every now and again.  --Vitenka



Samba stupidity


Got it:
Then you absolutely positively must set 'wins server' to the IP ADDRESS of your XP box.
With this configuration, a win98 machine should be able to see it too, as long as it is set not to log in to any domains. It may help to set the 98 server to use the same XP box for WINS resolution.



CUPS Overflowing



Great.  Linux saw my printer.  Linux now thinks the printer isn't there.  How do I force it to redetect?  (USB)  --Vitenka



CategoryComputing

Last edited March 30, 2012 9:12 pm (viewing revision 144, which is the newest) (diff)