Saturday, April 24, 2010

Setup sshd on cygwin in 4 minutes

4 steps, less than 4 minutes to do,
assuming you already have cygwin installed, plus the requisite packages.
(It took me closer to twenty minutes: I did some due diligence even though I had done this before on another system, and then there was the Windows firewall issue.)
All commands should be run in a bash shell unless directed otherwise.
  1. Check that you have cygrunsrv. The brute-force method:
    $ cygrunsrv.exe -S sshd
    cygrunsrv: Error starting a service: OpenService: Win32 error 1060:
    The specified service does not exist as an installed service.
    If you get 'command not found' instead, you'll need to install the cygrunsrv package.
  2. run ssh-host-config
    It seemed to take a minute or so before I got any output, but maybe that's because my virtual machine didn't have much entropy for key generation.
    Answer 'yes' at the prompts (including installing sshd as a service), unless you don't want to.
  3. now start the service:
    cygrunsrv.exe -S sshd
    (the service itself was created by ssh-host-config in step 2; when that script asks for the value of CYGWIN, you can give:
    binmode tty ntsec)

  4. Or start it from Windows: run services.msc (from a cmd window) or use 'net start sshd'.
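The whole setup can be sketched as a single bash session (a sketch, assuming the openssh and cygrunsrv packages are already installed and the shell has administrative rights):

```shell
# Run in a Cygwin bash shell with administrative rights.
ssh-host-config          # generates host keys and creates the sshd service;
                         # answer 'yes' at the prompts, and give
                         # 'binmode tty ntsec' when asked for CYGWIN
cygrunsrv.exe -S sshd    # start the service (-S = start)
cygrunsrv.exe -Q sshd    # query the service; its state should read Running
```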

All set?
Test from bash with 'ssh -v username@localhost'.
Now, test externally with 'ssh -v username@cygwinSshHost'. If you see no output at all, Windows firewall or some other band-aid is keeping you out.
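If the firewall is the culprit, the fix is to open TCP port 22. One way, from an elevated cmd window (syntax differs by Windows version, and the rule name here is just an example):

```shell
:: Windows XP / Server 2003:
netsh firewall add portopening TCP 22 "Cygwin sshd"

:: Windows Vista / 7 and later:
netsh advfirewall firewall add rule name="Cygwin sshd" dir=in action=allow protocol=TCP localport=22
```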

After you're logged in, run 'ssh-add -L >> ~/.ssh/authorized_keys2' so you don't need to type a password to log in anymore (this relies on your agent being forwarded, so it may only work from the second login onward -- agent forwarding may not happen on the first connection to a new host?).
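If you aren't running an ssh-agent, you can get the same effect by copying a public key over by hand (a sketch; 'username' and 'cygwinSshHost' are placeholders, and the id_rsa filename depends on the key type you chose):

```shell
# On the client: generate a key pair if you don't have one yet.
ssh-keygen -t rsa

# Append the public key to authorized_keys2 on the Cygwin host.
cat ~/.ssh/id_rsa.pub | \
  ssh username@cygwinSshHost 'mkdir -p ~/.ssh && cat >> ~/.ssh/authorized_keys2'
```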

More steps, with more detail: How to configure cygwin for sshd.

Monday, April 19, 2010

Websphere Network Deployment, cluster creation: "Check the add node log for details"

[A search for "Check the add node log for details." returned very little, hence this entry.]

When adding the first node to a cluster,
all of the node's configuration and applications are retrieved from it.
This may take a while. The output you see in the Deployment Manager console looks like this:
ADMU0001I: Begin federation of node node01 with Deployment Manager at mgrnode.company.org:8879.
ADMU0009I: Successfully connected to Deployment Manager Server: mgrnode.company.org:8879
ADMU0505I: Servers found in configuration:
ADMU0506I: Server name: server1
ADMU0506I: Server name: webserver01
ADMU0506I: Server name: WebSphere_Portal
ADMU2010I: Stopping all server processes for node node01
ADMU0510I: Server server1 is now STOPPED
ADMU0510I: Server WebSphere_Portal is now STOPPED
ADMU0024I: Deleting the old backup directory.
ADMU0015I: Backing up the original cell repository.
ADMU0012I: Creating Node Agent configuration for node: node01
ADMU0120I: isclite.ear will not be uploaded since it already exists in the target repository.
ADMU0120I: isclite.ear will not be uploaded since it already exists in the target repository.
ADMU0014I: Adding node node01 configuration to cell: mgrnodeCell01
The console has not received information on the add operation in a timely manner. The state of the operation is indeterminate. Check the add node log for details.
The addNode.log file is on the server you added to the cluster, in ...\WebSphere\wp_profile\logs\addNode.log.

If the add succeeded, the end of addNode.log should look like this:
[4/19/10 7:41:10:166 EDT] 0000000b NodeSyncTask A ADMS0003I: The configuration synchronization completed successfully.
[4/19/10 7:41:10:368 EDT] 0000000a AdminTool A ADMU0018I: Launching Node Agent process for node: node01
[4/19/10 7:41:33:410 EDT] 0000000a AdminTool A ADMU0505I: Servers found in configuration:
[4/19/10 7:41:33:410 EDT] 0000000a AdminTool A ADMU0506I: Server name: nodeagent
[4/19/10 7:41:33:425 EDT] 0000000a AdminTool A ADMU0506I: Server name: server1
[4/19/10 7:41:33:441 EDT] 0000000a AdminTool A ADMU0506I: Server name: webserver01
[4/19/10 7:41:33:457 EDT] 0000000a AdminTool A ADMU0506I: Server name: WebSphere_Portal
[4/19/10 7:41:36:561 EDT] 0000000a AdminTool A ADMU9990I:
[4/19/10 7:41:36:655 EDT] 0000000a AdminTool A ADMU0308I: The node node01 and associated applications were successfully added to the mgrnodeCell01 cell.
[4/19/10 7:41:36:670 EDT] 0000000a AdminTool A ADMU9990I:
[4/19/10 7:41:36:686 EDT] 0000000a AdminTool A ADMU0306I: Note:
[4/19/10 7:41:36:701 EDT] 0000000a AdminTool A ADMU0302I: Any cell-level documents from the standalone mgrnodeCell01 configuration have not been migrated to the new cell.
[4/19/10 7:41:36:733 EDT] 0000000a AdminTool A ADMU0307I: You might want to:
[4/19/10 7:41:36:733 EDT] 0000000a AdminTool A ADMU0303I: Update the configuration on the mgrnodeCell01 Deployment Manager with values from the old cell-level documents.
[4/19/10 7:41:36:748 EDT] 0000000a AdminTool A ADMU9990I:
[4/19/10 7:41:36:764 EDT] 0000000a AdminTool A ADMU0003I: Node node01 has been successfully federated.
For me, with only a modest amount of custom deployment, this appeared about 10 minutes after the add operation started.

The portal will not necessarily start up successfully after this, however. I got the error
com.ibm.wps.ac.DomainAdministratorNotFoundException: EJPSB0107E: Exception occurred while retrieving the identity of the domain admin ...
and saw only 404 errors when trying to access the portal.
Problem: I had missed a step.
[PDF] A Step-By-Step Guide to Configuring a WebSphere Portal v6.1.0.3 (WPv615ClusterGuide.pdf) -- I wish I had found this document earlier.