from Bowtie mapping to gene names

from Bowtie mapping to gene names

Daniel Elleder
Hi,

I have reads from an Illumina paired-end run mapped against a reference genome using Bowtie. Is there a Galaxy tool that would allow me to extract gene names for the mapped chromosomal regions? In this case I have the cat genome; an example output row is below.

Thanks for any help,
Daniel

HWUSI-EAS610_110227_00028:5:1:7767:999#0	99	chrE2	27900093	255	82M	=	27900139
128
NACCTGTTATGTACTAAGAAGCTTATTCTCCCANNNNNNCTNNNNNNNNNCATATGTNGNGNNNNNNNNNNNNNNNNNNNAA	##################################################################################	XA:i:2	MD:Z:0A17T14T0G0C0T0G0A2C0A0T0C0A0A0A0G0G7G1T1A0C0C0C0C0T0T0A0G0T0G0C0C0C0G0T0G0A0T2	NM:i:38
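(For readers of the archive: the genomic interval covered by a read like the one above can be derived from the SAM fields shown. A minimal sketch, not a Galaxy tool; it assumes standard SAM conventions, where only M/D/N/=/X CIGAR operations consume reference bases.)

```python
import re

def sam_interval(sam_line):
    """Derive the 0-based, half-open genomic interval covered by a SAM alignment."""
    fields = sam_line.split("\t")
    chrom, pos, cigar = fields[2], int(fields[3]), fields[5]
    # Sum the lengths of CIGAR operations that consume reference bases.
    ref_len = sum(int(n) for n, op in re.findall(r"(\d+)([MIDNSHP=X])", cigar)
                  if op in "MDN=X")
    start = pos - 1  # SAM POS is 1-based
    return chrom, start, start + ref_len

# The read above maps to chrE2 at POS 27900093 with CIGAR 82M:
line = "read1\t99\tchrE2\t27900093\t255\t82M\t=\t27900139\t128\tACGT\t####"
print(sam_interval(line))  # → ('chrE2', 27900092, 27900174)
```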


------------------------------------------------

Daniel Elleder, Ph.D.
Postdoctoral fellow
Center for Infectious Disease Dynamics
Pennsylvania State University
613 Mueller Laboratory
University Park, PA 16802
tel: (814) 867-2122



___________________________________________________________
The Galaxy User list should be used for the discussion of
Galaxy analysis and other features on the public server
at usegalaxy.org.  Please keep all replies on the list by
using "reply all" in your mail client.  For discussion of
local Galaxy instances and the Galaxy source code, please
use the Galaxy Development list:

  http://lists.bx.psu.edu/listinfo/galaxy-dev

To manage your subscriptions to this and other Galaxy lists,
please use the interface at:

  http://lists.bx.psu.edu/

Re: from Bowtie mapping to gene names

Anton Nekrutenko
Daniel:

To get a good idea of how Galaxy handles so-called interval operations, take a look at http://usegalaxy.org/galaxy101.
The answer to your question depends on what you would like to do. Are you interested in obtaining the read coverage for a set of genes, or simply in identifying a set of reads mapping to a set of genes?
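(Archive note: the interval operation described here — joining read intervals against a gene annotation — can be sketched outside Galaxy as well. The example below uses plain BED-style tuples and a hypothetical annotation, not the actual Galaxy tools; real tools such as Galaxy's interval join or bedtools intersect work on sorted/indexed data rather than this naive scan.)

```python
def genes_for_reads(reads, genes):
    """Map each read interval to the names of overlapping genes.

    reads: list of (chrom, start, end); genes: list of (chrom, start, end, name).
    BED convention: 0-based, half-open intervals. Naive O(n*m) overlap scan.
    """
    hits = {}
    for r_chrom, r_start, r_end in reads:
        names = [name for g_chrom, g_start, g_end, name in genes
                 if g_chrom == r_chrom and r_start < g_end and g_start < r_end]
        hits[(r_chrom, r_start, r_end)] = names
    return hits

reads = [("chrE2", 27900092, 27900174)]          # interval from the read above
genes = [("chrE2", 27850000, 27950000, "GENE_A"),  # hypothetical annotation
         ("chrE2", 28000000, 28100000, "GENE_B")]
print(genes_for_reads(reads, genes))  # → {('chrE2', 27900092, 27900174): ['GENE_A']}
```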

Thanks,

anton

On Apr 11, 2011, at 11:17 AM, Daniel Elleder wrote:


Anton Nekrutenko
http://nekrut.bx.psu.edu
http://usegalaxy.org





Re: Help!!!!!! with Galaxy Cloud!!!!!

Mike Dufault
Hello Galaxy Staff,

My data has been running on Amazon EC2 for just over 24 hours. I have not closed any windows, and my exome analysis made it all the way through to the filter-on-pileup step. I have two tabs for this instance: one is the Galaxy Cloudman Console, and the other is the tab where I perform the analysis, load data, view the history, etc.

Anyway, I went to add a step to the workflow and got the "Welcome Galaxy to the Cloud" screen, along with the message "There is no Galaxy instance running on this host, or the Galaxy instance is not responding. To manage Galaxy on this host, please use the Cloud Console."

What happened???

When I go back to the Galaxy Cloudman Console, it shows that my instance is still running, along with the four cores; the cluster log is below. AWS also shows that my instance is running.

Will the workflow finish? Can I get my data? How?

I tried to re-access the analysis page by selecting "Access Galaxy" from the "Galaxy Cloudman Console" but it sends me to the same "Welcome page."

Is there a way to get back into the analysis page?

Please help!!!

Thanks,
Mike

The cluster log shows:
  • 13:05:24 - Master starting
  • 13:05:25 - Completed initial cluster configuration.
  • 13:05:33 - Starting service 'SGE'
  • 13:05:48 - Configuring SGE...
  • 13:05:56 - Successfully setup SGE; configuring SGE
  • 13:05:57 - Saved file 'persistent_data.yaml' to bucket 'cm-a42f040c55e7519eb63bbaf269fa78d3'
  • 13:05:57 - Saved file 'cm_boot.py' to bucket 'cm-a42f040c55e7519eb63bbaf269fa78d3'
  • 13:05:57 - Saved file 'cm.tar.gz' to bucket 'cm-a42f040c55e7519eb63bbaf269fa78d3'
  • 13:05:57 - Problem connecting to bucket 'cm-a42f040c55e7519eb63bbaf269fa78d3', attempt 1/5
  • 13:05:59 - Saved file 'Fam122261.clusterName' to bucket 'cm-a42f040c55e7519eb63bbaf269fa78d3'
  • 13:06:24 - Initializing a 'Galaxy' cluster.
  • 13:06:24 - Retrieved file 'snaps.yaml' from bucket 'cloudman' to 'cm_snaps.yaml'.
  • 13:06:41 - Adding 3 instance(s)...
  • 13:07:02 - Saved file 'persistent_data.yaml' to bucket 'cm-a42f040c55e7519eb63bbaf269fa78d3'
  • 13:07:38 - Saved file 'persistent_data.yaml' to bucket 'cm-a42f040c55e7519eb63bbaf269fa78d3'
  • 13:07:38 - Saved file 'universe_wsgi.ini.cloud' to bucket 'cm-a42f040c55e7519eb63bbaf269fa78d3'
  • 13:07:38 - Saved file 'tool_conf.xml.cloud' to bucket 'cm-a42f040c55e7519eb63bbaf269fa78d3'
  • 13:07:48 - Error mounting file system '/mnt/galaxyData' from '/dev/sdg3', running command '/bin/mount /dev/sdg3 /mnt/galaxyData' returned code '32' and following stderr: 'mount: you must specify the filesystem type '
  • 13:07:52 - Saved file 'persistent_data.yaml' to bucket 'cm-a42f040c55e7519eb63bbaf269fa78d3'
  • 13:07:52 - Starting service 'Postgres'
  • 13:07:52 - PostgreSQL data directory '/mnt/galaxyData/pgsql/data' does not exist (yet?)
  • 13:07:52 - Configuring PostgreSQL with a database for Galaxy...
  • 13:08:05 - Saved file 'persistent_data.yaml' to bucket 'cm-a42f040c55e7519eb63bbaf269fa78d3'
  • 13:08:05 - Starting service 'Galaxy'
  • 13:08:05 - Galaxy daemon not running.
  • 13:08:05 - Galaxy service state changed from 'Starting' to 'Error'
  • 13:08:05 - Setting up Galaxy application
  • 13:08:05 - Retrieved file 'universe_wsgi.ini.cloud' from bucket 'cm-a42f040c55e7519eb63bbaf269fa78d3' to '/mnt/galaxyTools/galaxy-central/universe_wsgi.ini'.
  • 13:08:05 - Retrieved file 'tool_conf.xml.cloud' from bucket 'cm-a42f040c55e7519eb63bbaf269fa78d3' to '/mnt/galaxyTools/galaxy-central/tool_conf.xml'.
  • 13:08:05 - Retrieved file 'tool_data_table_conf.xml.cloud' from bucket 'cloudman' to '/mnt/galaxyTools/galaxy-central/tool_data_table_conf.xml.cloud'.
  • 13:08:05 - Starting Galaxy...
  • 13:08:09 - Galaxy service state changed from 'Error' to 'Starting'
  • 13:08:09 - Saved file 'persistent_data.yaml' to bucket 'cm-a42f040c55e7519eb63bbaf269fa78d3'
  • 13:08:09 - Saved file 'tool_data_table_conf.xml.cloud' to bucket 'cm-a42f040c55e7519eb63bbaf269fa78d3'
  • 13:08:28 - Instance 'i-e46f0a8b' reported alive
  • 13:08:28 - Successfully generated root user's public key.
  • 13:08:28 - Sent master public key to worker instance 'i-e46f0a8b'.
  • 13:08:28 - Instance 'i-e26f0a8d' reported alive
  • 13:08:28 - Sent master public key to worker instance 'i-e26f0a8d'.
  • 13:08:33 - Instance 'i-e06f0a8f' reported alive
  • 13:08:33 - Sent master public key to worker instance 'i-e06f0a8f'.
  • 13:08:33 - Adding instance i-e46f0a8b to SGE Execution Host list
  • 13:08:44 - Successfully added instance 'i-e46f0a8b' to SGE
  • 13:08:44 - Waiting on worker instance 'i-e46f0a8b' to configure itself...
  • 13:08:44 - Instance 'i-e26f0a8d' already in SGE's @allhosts
  • 13:08:44 - Waiting on worker instance 'i-e26f0a8d' to configure itself...
  • 13:08:45 - Instance 'i-e06f0a8f' already in SGE's @allhosts
  • 13:08:45 - Waiting on worker instance 'i-e06f0a8f' to configure itself...
  • 13:08:50 - Instance 'i-e46f0a8b' ready
  • 13:09:27 - Galaxy service state changed from 'Starting' to 'Running'
  • 22:38:18 - Found '3' idle instances; trying to remove '2'
  • 22:38:18 - Specific termination of instance 'i-e26f0a8d' requested.
  • 22:38:18 - Removing instance 'i-e26f0a8d' from SGE
  • 22:38:18 - Successfully updated @allhosts to remove 'i-e26f0a8d'
  • 22:38:19 - Terminating instance 'i-e26f0a8d'
  • 22:38:19 - Initiated requested termination of instance. Terminating 'i-e26f0a8d'.
  • 22:38:19 - Specific termination of instance 'i-e46f0a8b' requested.
  • 22:38:19 - Removing instance 'i-e46f0a8b' from SGE
  • 22:38:19 - Successfully initiated termination of instance 'i-e26f0a8d'
  • 22:38:19 - Successfully updated @allhosts to remove 'i-e46f0a8b'
  • 22:38:20 - Terminating instance 'i-e46f0a8b'
  • 22:38:20 - Initiated requested termination of instance. Terminating 'i-e46f0a8b'.
  • 22:38:20 - Initiated requested termination of instances. Terminating '2' instances.
  • 22:38:20 - Successfully initiated termination of instance 'i-e46f0a8b'
  • 22:38:41 - Found '1' idle instances; trying to remove '1'
  • 22:38:41 - Specific termination of instance 'i-e06f0a8f' requested.
  • 22:38:41 - Removing instance 'i-e06f0a8f' from SGE
  • 22:38:41 - Successfully updated @allhosts to remove 'i-e06f0a8f'
  • 22:38:42 - Initiated requested termination of instance. Terminating 'i-e06f0a8f'.
  • 22:38:42 - Initiated requested termination of instances. Terminating '1' instances.
  • 22:38:42 - Terminating instance 'i-e06f0a8f'
  • 22:38:42 - Successfully initiated termination of instance 'i-e06f0a8f'
  • 22:38:47 - Instance 'i-e26f0a8d' successfully terminated.
  • 22:38:49 - Instance 'i-e46f0a8b' successfully terminated.
  • 22:38:59 - Adding 3 instance(s)...
  • 22:39:07 - Instance 'i-e06f0a8f' successfully terminated.
  • 22:41:02 - Instance 'i-fa096e95' reported alive
  • 22:41:02 - Sent master public key to worker instance 'i-fa096e95'.
  • 22:41:06 - Adding instance i-fa096e95 to SGE Execution Host list
  • 22:41:17 - Successfully added instance 'i-fa096e95' to SGE
  • 22:41:17 - Waiting on worker instance 'i-fa096e95' to configure itself...
  • 22:41:17 - Instance 'i-fe096e91' reported alive
  • 22:41:17 - Sent master public key to worker instance 'i-fe096e91'.
  • 22:41:22 - Adding instance i-fe096e91 to SGE Execution Host list
  • 22:41:34 - Successfully added instance 'i-fe096e91' to SGE
  • 22:41:34 - Waiting on worker instance 'i-fe096e91' to configure itself...
  • 22:41:34 - Instance 'i-fa096e95' ready
  • 22:41:52 - Instance 'i-fe096e91' ready
  • 22:42:28 - Instance 'i-fc096e93' reported alive
  • 22:42:28 - Sent master public key to worker instance 'i-fc096e93'.
  • 22:42:38 - Adding instance i-fc096e93 to SGE Execution Host list
  • 22:42:49 - Successfully added instance 'i-fc096e93' to SGE
  • 22:42:49 - Waiting on worker instance 'i-fc096e93' to configure itself...
  • 22:43:13 - Instance 'i-fc096e93' ready
  • 18:29:10 - Failure checking disk usage. [Errno 12] Cannot allocate memory
  • 18:29:10 - Failure checking disk usage. [Errno 12] Cannot allocate memory
  • 18:29:10 - Failure checking disk usage. [Errno 12] Cannot allocate memory



Re: Help!!!!!! with Galaxy Cloud!!!!!

Enis Afgan-2
Hi Mike, 
Try accessing your Galaxy instance now. It should be ok.

The link in your email contained the IP for your instance, so I took the liberty of restarting Galaxy, which brought it back up. There seems to have been an issue with Galaxy accessing its database, which caused Galaxy to crash. We'll look into why that happened in the first place, but it should be ok now.

Let me know if you have any more trouble,
Enis

On Tue, Apr 12, 2011 at 2:49 PM, Mike Dufault <[hidden email]> wrote:



Re: Help!!!!!! with Galaxy Cloud!!!!!

Mike Dufault
Hi Enis,

Thanks for looking into this.

From the Galaxy Cloudman Console log, I can see that it was restarted (thanks), but the "Access Galaxy" option is still grayed out, and I don't know how to access the analysis window.

Is there a way back into my analysis?

Thanks,
Mike



--- On Tue, 4/12/11, Enis Afgan <[hidden email]> wrote:



Reply | Threaded
Open this post in threaded view
|

Re: Help!!!!!! with Galaxy Cloud!!!!!

Enis Afgan-2
Ahh, for some reason CloudMan thinks Galaxy is still 'starting' rather than 'running', and has therefore not enabled that button. To access the analysis, just delete the '/cloud' part of the URL in your browser and Galaxy should load.

Sorry about the confusion,
Enis

On Tue, Apr 12, 2011 at 9:12 PM, Mike Dufault <[hidden email]> wrote:
Hi Enis,

Thanks for looking into this.

From the Galaxy Cloudman Console, I can see that it was restarted from the log (thanks), but the "Access Galaxy" choice is still grayed out and I don't know how to access the Analysis window.

Is there a way back into my analysis?

Thanks,
Mike



--- On Tue, 4/12/11, Enis Afgan <[hidden email]> wrote:

From: Enis Afgan <[hidden email]>
Subject: Re: [galaxy-user] Help!!!!!! with Galaxy Cloud!!!!!
To: "Mike Dufault" <[hidden email]>
Cc: "Anton Nekrutenko" <[hidden email]>, [hidden email]
Date: Tuesday, April 12, 2011, 8:55 PM


Hi Mike, 
Try accessing your Galaxy instance now. It should be ok.

The link in your email contained the IP for your instance, so I took the liberty of restarting Galaxy, and that brought it back up. There seems to have been an issue with Galaxy accessing its database, which resulted in Galaxy crashing. We'll look into why that happened in the first place, but it should be ok now.

Let me know if you have any more trouble,
Enis

On Tue, Apr 12, 2011 at 2:49 PM, Mike Dufault <dufaultm@...> wrote:
Hello Galaxy Staff,

My data has been running on the Amazon EC2 for just over 24hrs. I have not closed any windows and my Exome analysis made it all the way through to filter on Pile up. I have two tabs for this instance. One is the Galaxy Cloudman Console and the other is the tab where I perform the analysis, load data, history etc.

Anyway, I went to add a step to the workflow and was shown the "Welcome Galaxy to the Cloud" screen along with the message "There is no Galaxy instance running on this host, or the Galaxy instance is not responding. To manage Galaxy on this host, please use the Cloud Console."

What happened???

When I go back to the Galaxy Cloudman Console, it shows that my instance is still running along with the four cores, the Cluster log is below. AWS also shows that my instance is running.

Will the workflow finish? Can I get my data? How?

I tried to re-access the analysis page by selecting "Access Galaxy" from the "Galaxy Cloudman Console" but it sends me to the same "Welcome page."

Is there a way to get back into the analysis page?

Please help!!!

Thanks,
Mike

The cluster log shows:
  • 13:05:24 - Master starting
  • 13:05:25 - Completed initial cluster configuration.
  • 13:05:33 - Starting service 'SGE'
  • 13:05:48 - Configuring SGE...
  • 13:05:56 - Successfully setup SGE; configuring SGE
  • 13:05:57 - Saved file 'persistent_data.yaml' to bucket 'cm-a42f040c55e7519eb63bbaf269fa78d3'
  • 13:05:57 - Saved file 'cm_boot.py' to bucket 'cm-a42f040c55e7519eb63bbaf269fa78d3'
  • 13:05:57 - Saved file 'cm.tar.gz' to bucket 'cm-a42f040c55e7519eb63bbaf269fa78d3'
  • 13:05:57 - Problem connecting to bucket 'cm-a42f040c55e7519eb63bbaf269fa78d3', attempt 1/5
  • 13:05:59 - Saved file 'Fam122261.clusterName' to bucket 'cm-a42f040c55e7519eb63bbaf269fa78d3'
  • 13:06:24 - Initializing a 'Galaxy' cluster.
  • 13:06:24 - Retrieved file 'snaps.yaml' from bucket 'cloudman' to 'cm_snaps.yaml'.
  • 13:06:41 - Adding 3 instance(s)...
  • 13:07:02 - Saved file 'persistent_data.yaml' to bucket 'cm-a42f040c55e7519eb63bbaf269fa78d3'
  • 13:07:38 - Saved file 'persistent_data.yaml' to bucket 'cm-a42f040c55e7519eb63bbaf269fa78d3'
  • 13:07:38 - Saved file 'universe_wsgi.ini.cloud' to bucket 'cm-a42f040c55e7519eb63bbaf269fa78d3'
  • 13:07:38 - Saved file 'tool_conf.xml.cloud' to bucket 'cm-a42f040c55e7519eb63bbaf269fa78d3'
  • 13:07:48 - Error mounting file system '/mnt/galaxyData' from '/dev/sdg3', running command '/bin/mount /dev/sdg3 /mnt/galaxyData' returned code '32' and following stderr: 'mount: you must specify the filesystem type '
  • 13:07:52 - Saved file 'persistent_data.yaml' to bucket 'cm-a42f040c55e7519eb63bbaf269fa78d3'
  • 13:07:52 - Starting service 'Postgres'
  • 13:07:52 - PostgreSQL data directory '/mnt/galaxyData/pgsql/data' does not exist (yet?)
  • 13:07:52 - Configuring PostgreSQL with a database for Galaxy...
  • 13:08:05 - Saved file 'persistent_data.yaml' to bucket 'cm-a42f040c55e7519eb63bbaf269fa78d3'
  • 13:08:05 - Starting service 'Galaxy'
  • 13:08:05 - Galaxy daemon not running.
  • 13:08:05 - Galaxy service state changed from 'Starting' to 'Error'
  • 13:08:05 - Setting up Galaxy application
  • 13:08:05 - Retrieved file 'universe_wsgi.ini.cloud' from bucket 'cm-a42f040c55e7519eb63bbaf269fa78d3' to '/mnt/galaxyTools/galaxy-central/universe_wsgi.ini'.
  • 13:08:05 - Retrieved file 'tool_conf.xml.cloud' from bucket 'cm-a42f040c55e7519eb63bbaf269fa78d3' to '/mnt/galaxyTools/galaxy-central/tool_conf.xml'.
  • 13:08:05 - Retrieved file 'tool_data_table_conf.xml.cloud' from bucket 'cloudman' to '/mnt/galaxyTools/galaxy-central/tool_data_table_conf.xml.cloud'.
  • 13:08:05 - Starting Galaxy...
  • 13:08:09 - Galaxy service state changed from 'Error' to 'Starting'
  • 13:08:09 - Saved file 'persistent_data.yaml' to bucket 'cm-a42f040c55e7519eb63bbaf269fa78d3'
  • 13:08:09 - Saved file 'tool_data_table_conf.xml.cloud' to bucket 'cm-a42f040c55e7519eb63bbaf269fa78d3'
  • 13:08:28 - Instance 'i-e46f0a8b' reported alive
  • 13:08:28 - Successfully generated root user's public key.
  • 13:08:28 - Sent master public key to worker instance 'i-e46f0a8b'.
  • 13:08:28 - Instance 'i-e26f0a8d' reported alive
  • 13:08:28 - Sent master public key to worker instance 'i-e26f0a8d'.
  • 13:08:33 - Instance 'i-e06f0a8f' reported alive
  • 13:08:33 - Sent master public key to worker instance 'i-e06f0a8f'.
  • 13:08:33 - Adding instance i-e46f0a8b to SGE Execution Host list
  • 13:08:44 - Successfully added instance 'i-e46f0a8b' to SGE
  • 13:08:44 - Waiting on worker instance 'i-e46f0a8b' to configure itself...
  • 13:08:44 - Instance 'i-e26f0a8d' already in SGE's @allhosts
  • 13:08:44 - Waiting on worker instance 'i-e26f0a8d' to configure itself...
  • 13:08:45 - Instance 'i-e06f0a8f' already in SGE's @allhosts
  • 13:08:45 - Waiting on worker instance 'i-e06f0a8f' to configure itself...
  • 13:08:50 - Instance 'i-e46f0a8b' ready
  • 13:09:27 - Galaxy service state changed from 'Starting' to 'Running'
  • 22:38:18 - Found '3' idle instances; trying to remove '2'
  • 22:38:18 - Specific termination of instance 'i-e26f0a8d' requested.
  • 22:38:18 - Removing instance 'i-e26f0a8d' from SGE
  • 22:38:18 - Successfully updated @allhosts to remove 'i-e26f0a8d'
  • 22:38:19 - Terminating instance 'i-e26f0a8d'
  • 22:38:19 - Initiated requested termination of instance. Terminating 'i-e26f0a8d'.
  • 22:38:19 - Specific termination of instance 'i-e46f0a8b' requested.
  • 22:38:19 - Removing instance 'i-e46f0a8b' from SGE
  • 22:38:19 - Successfully initiated termination of instance 'i-e26f0a8d'
  • 22:38:19 - Successfully updated @allhosts to remove 'i-e46f0a8b'
  • 22:38:20 - Terminating instance 'i-e46f0a8b'
  • 22:38:20 - Initiated requested termination of instance. Terminating 'i-e46f0a8b'.
  • 22:38:20 - Initiated requested termination of instances. Terminating '2' instances.
  • 22:38:20 - Successfully initiated termination of instance 'i-e46f0a8b'
  • 22:38:41 - Found '1' idle instances; trying to remove '1'
  • 22:38:41 - Specific termination of instance 'i-e06f0a8f' requested.
  • 22:38:41 - Removing instance 'i-e06f0a8f' from SGE
  • 22:38:41 - Successfully updated @allhosts to remove 'i-e06f0a8f'
  • 22:38:42 - Initiated requested termination of instance. Terminating 'i-e06f0a8f'.
  • 22:38:42 - Initiated requested termination of instances. Terminating '1' instances.
  • 22:38:42 - Terminating instance 'i-e06f0a8f'
  • 22:38:42 - Successfully initiated termination of instance 'i-e06f0a8f'
  • 22:38:47 - Instance 'i-e26f0a8d' successfully terminated.
  • 22:38:49 - Instance 'i-e46f0a8b' successfully terminated.
  • 22:38:59 - Adding 3 instance(s)...
  • 22:39:07 - Instance 'i-e06f0a8f' successfully terminated.
  • 22:41:02 - Instance 'i-fa096e95' reported alive
  • 22:41:02 - Sent master public key to worker instance 'i-fa096e95'.
  • 22:41:06 - Adding instance i-fa096e95 to SGE Execution Host list
  • 22:41:17 - Successfully added instance 'i-fa096e95' to SGE
  • 22:41:17 - Waiting on worker instance 'i-fa096e95' to configure itself...
  • 22:41:17 - Instance 'i-fe096e91' reported alive
  • 22:41:17 - Sent master public key to worker instance 'i-fe096e91'.
  • 22:41:22 - Adding instance i-fe096e91 to SGE Execution Host list
  • 22:41:34 - Successfully added instance 'i-fe096e91' to SGE
  • 22:41:34 - Waiting on worker instance 'i-fe096e91' to configure itself...
  • 22:41:34 - Instance 'i-fa096e95' ready
  • 22:41:52 - Instance 'i-fe096e91' ready
  • 22:42:28 - Instance 'i-fc096e93' reported alive
  • 22:42:28 - Sent master public key to worker instance 'i-fc096e93'.
  • 22:42:38 - Adding instance i-fc096e93 to SGE Execution Host list
  • 22:42:49 - Successfully added instance 'i-fc096e93' to SGE
  • 22:42:49 - Waiting on worker instance 'i-fc096e93' to configure itself...
  • 22:43:13 - Instance 'i-fc096e93' ready
  • 18:29:10 - Failure checking disk usage. [Errno 12] Cannot allocate memory
  • 18:29:10 - Failure checking disk usage. [Errno 12] Cannot allocate memory
  • 18:29:10 - Failure checking disk usage. [Errno 12] Cannot allocate memory


Reply | Threaded
Open this post in threaded view
|

Re: Help!!!!!! with Galaxy Cloud!!!!!

Mike Dufault
Hi Enis,

THANK YOU!!!

I see that my "filter pileup on data" step is running. Is this the same analysis that was running before, or did it relaunch when you restarted Galaxy? I just don't know if the analysis would be compromised.

Thanks again to you and the whole Galaxy team.

Best,
Mike

--- On Tue, 4/12/11, Enis Afgan <[hidden email]> wrote:

From: Enis Afgan <[hidden email]>
Subject: Re: [galaxy-user] Help!!!!!! with Galaxy Cloud!!!!!
To: "Mike Dufault" <[hidden email]>
Cc: "Anton Nekrutenko" <[hidden email]>, [hidden email]
Date: Tuesday, April 12, 2011, 9:16 PM

Ahh, for some reason CloudMan thinks Galaxy is still 'starting' rather than 'running', and has therefore not enabled that button. To access the analysis, just delete the '/cloud' part of the URL in your browser and Galaxy should load.

Sorry about the confusion,
Enis



Reply | Threaded
Open this post in threaded view
|

Re: Help!!!!!! with Galaxy Cloud!!!!!

Enis Afgan-2
Galaxy has the functionality to recover any jobs that were running when it is restarted, so it is quite possible for the job to still be running. In addition, from the CloudMan console, it appears that at least one instance is pretty heavily loaded, which can also mean that the job is still running. However, without actually accessing the instance through the command line and checking the status of the job queue, it is not possible to tell whether the job is actually running. Do you know how to do that? It's just a few commands in the terminal:
- access the instance
[local]$ ssh -i <path to the private key you downloaded from AWS when you created a key pair> ubuntu@<instance public DNS>
- become galaxy user
[ec2]$ sudo su galaxy 
- list any running jobs
[ec2]$ qstat

If that command returns a list of jobs and the jobs are in state 'r' (running), the job is still running; otherwise, it is not.
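As an illustration of reading that listing non-interactively: the snippet below counts jobs in state 'r'. The qstat output shown here is a made-up example (the job names and times are invented), assuming SGE's default layout where the job state is the fifth column:

```shell
# Illustrative only: a made-up qstat listing in SGE's default layout
# (job-ID, priority, name, user, state, submit/start time, queue, slots).
qstat_output='job-ID  prior   name       user    state submit/start at     queue          slots
-----------------------------------------------------------------------------------------------
     1 0.55500 filter_pup galaxy  r     04/12/2011 21:40:11 all.q@master       1
     2 0.55500 upload1    galaxy  qw    04/12/2011 21:41:02                    1'

# Skip the two header lines and count jobs whose state (5th column) is 'r'.
echo "$qstat_output" | awk 'NR > 2 && $5 == "r" {n++} END {print n+0, "running job(s)"}'
# prints: 1 running job(s)
```

On a live cluster you would pipe the real listing instead, e.g. `qstat | awk 'NR > 2 && $5 == "r"'` to show just the running jobs.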

Let me know how it goes,
Enis

On Tue, Apr 12, 2011 at 9:49 PM, Mike Dufault <[hidden email]> wrote:
Hi Enis,

THANK YOU!!!

I see that my "filter pileup on data" step is running. Is this the same analysis that was running before, or did it relaunch when you restarted Galaxy? I just don't know if the analysis would be compromised.

Thanks again to you and the whole Galaxy team.

Best,

Mike

  • 22:43:13 - Instance 'i-fc096e93' ready
  • 18:29:10 - Failure checking disk usage. [Errno 12] Cannot allocate memory
  • 18:29:10 - Failure checking disk usage. [Errno 12] Cannot allocate memory
  • 18:29:10 - Failure checking disk usage. [Errno 12] Cannot allocate memory


Reply | Threaded
Open this post in threaded view
|

Re: Help!!!!!! with Galaxy Cloud!!!!!

vasu punj
I was wondering if there are instructions on how to run Galaxy from the Cloud Console. First, though, I want to know how Galaxy is set up on the console. Can someone direct me to the instructions, please?
 
Thanks.
  
--- On Tue, 4/12/11, Enis Afgan <[hidden email]> wrote:

From: Enis Afgan <[hidden email]>
Subject: Re: [galaxy-user] Help!!!!!! with Galaxy Cloud!!!!!
To: "Mike Dufault" <[hidden email]>
Cc: [hidden email]
Date: Tuesday, April 12, 2011, 9:31 PM

Galaxy has the functionality to recover any jobs that were running after it is restarted, so it is quite possible that the job is still running. In addition, from the CloudMan console, it appears that at least one instance is pretty heavily loaded, which can also mean that the job is still running. However, without actually accessing the instance through the command line and checking the status of the job queue, it is not possible to tell whether the job is actually running. Do you know how to do that? It's just a few commands in the terminal:
- access the instance
[local]$ ssh -i <path to the private key you downloaded from AWS when you created a key pair> ubuntu@<instance public DNS>
- become galaxy user
[ec2]$ sudo su galaxy 
- list any running jobs
[ec2]$ qstat

If that command returns a list of jobs and the jobs are in state 'r' (running), the job is still running; otherwise, it is not.
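For illustration, the check above can also be scripted. The helper below is a hypothetical sketch, not part of CloudMan or SGE; it assumes the default qstat output layout (two header lines, job state in the fifth column):

```shell
# Hypothetical helper: count SGE jobs in the 'r' (running) state by
# parsing qstat-style output on stdin. Assumes the default qstat
# layout: two header lines, then the state in column 5.
count_running_jobs() {
    tail -n +3 | awk '$5 == "r"' | wc -l
}

# Demonstrated on captured sample output; in real use: qstat | count_running_jobs
sample='job-ID  prior    name           user    state  submit/start at      queue
------------------------------------------------------------------------------
      1 0.55500  filter_pileup  galaxy  r      04/12/2011 21:40:01  all.q@ip-10-1-1-1
      2 0.55500  upload1        galaxy  qw     04/12/2011 21:41:11'
printf '%s\n' "$sample" | count_running_jobs   # prints 1
```

A non-zero count means at least one job is still running; jobs in state 'qw' are queued, not running.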

Let me know how it goes,
Enis

On Tue, Apr 12, 2011 at 9:49 PM, Mike Dufault <dufaultm@...> wrote:
Hi Enis,

THANK YOU!!!

I see that my "filter pileup on data" step is running. Is this the same analysis that was running before, or did it relaunch when you restarted Galaxy? I just don't know whether the analysis would be compromised.

Thanks again to you and the whole Galaxy team.

Best,

Mike

--- On Tue, 4/12/11, Enis Afgan <eafgan@...> wrote:

From: Enis Afgan <eafgan@...>
Subject: Re: [galaxy-user] Help!!!!!! with Galaxy Cloud!!!!!
To: "Mike Dufault" <dufaultm@...>
Cc: "Anton Nekrutenko" <anton@...>, galaxy-user@...
Date: Tuesday, April 12, 2011, 9:16 PM


Ahh, for some reason CloudMan thinks Galaxy is not 'running' but still 'starting' and has therefore not enabled that button. To access the analysis, just delete the '/cloud' part of the URL in your browser and that should load Galaxy.
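A trivial illustration of that URL edit (the hostname below is a made-up placeholder, not a real instance):

```shell
# Dropping the trailing '/cloud' component turns the CloudMan console
# URL into the Galaxy URL. Placeholder hostname for illustration only.
console_url="http://ec2-203-0-113-10.compute-1.amazonaws.com/cloud"
galaxy_url="${console_url%/cloud}"   # strip the '/cloud' suffix
echo "$galaxy_url"                   # prints http://ec2-203-0-113-10.compute-1.amazonaws.com
```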

Sorry about the confusion,
Enis

On Tue, Apr 12, 2011 at 9:12 PM, Mike Dufault <dufaultm@...> wrote:
Hi Enis,

Thanks for looking into this.

From the Galaxy Cloudman Console, I can see that it was restarted from the log (thanks), but the "Access Galaxy" choice is still grayed out and I don't know how to access the Analysis window.

Is there a way back into my analysis?

Thanks,
Mike



--- On Tue, 4/12/11, Enis Afgan <eafgan@...> wrote:

From: Enis Afgan <eafgan@...>
Subject: Re: [galaxy-user] Help!!!!!! with Galaxy Cloud!!!!!
To: "Mike Dufault" <dufaultm@...>
Cc: "Anton Nekrutenko" <anton@...>, galaxy-user@...
Date: Tuesday, April 12, 2011, 8:55 PM


Hi Mike, 
Try accessing your Galaxy instance now. It should be ok.

The link in your email contained the IP for your instance, so I took the liberty of restarting Galaxy, and that brought it back up. There seems to have been an issue with Galaxy accessing its database, which resulted in Galaxy crashing. We'll look into why that happened in the first place, but it should be ok now.

Let me know if you have any more trouble,
Enis

On Tue, Apr 12, 2011 at 2:49 PM, Mike Dufault <dufaultm@...> wrote:
Hello Galaxy Staff,

My data has been running on the Amazon EC2 for just over 24 hours. I have not closed any windows, and my exome analysis made it all the way through to Filter pileup. I have two tabs for this instance: one is the Galaxy Cloudman Console and the other is the tab where I perform the analysis, load data, view history, etc.

Anyway, I went to add a step to the workflow, and the "Welcome Galaxy to the Cloud" screen appeared along with the message: "There is no Galaxy instance running on this host, or the Galaxy instance is not responding. To manage Galaxy on this host, please use the Cloud Console."

What happened???

When I go back to the Galaxy Cloudman Console, it shows that my instance is still running along with the four cores, the Cluster log is below. AWS also shows that my instance is running.

Will the work flow finish? Can I get my data? How?

I tried to re-access the analysis page by selecting "Access Galaxy" from the "Galaxy Cloudman Console" but it sends me to the same "Welcome page."

Is there a way to get back into the analysis page?

Please help!!!

Thanks,
Mike

The cluster log shows:
  • 13:05:24 - Master starting
  • 13:05:25 - Completed initial cluster configuration.
  • 13:05:33 - Starting service 'SGE'
  • 13:05:48 - Configuring SGE...
  • 13:05:56 - Successfully setup SGE; configuring SGE
  • 13:05:57 - Saved file 'persistent_data.yaml' to bucket 'cm-a42f040c55e7519eb63bbaf269fa78d3'
  • 13:05:57 - Saved file 'cm_boot.py' to bucket 'cm-a42f040c55e7519eb63bbaf269fa78d3'
  • 13:05:57 - Saved file 'cm.tar.gz' to bucket 'cm-a42f040c55e7519eb63bbaf269fa78d3'
  • 13:05:57 - Problem connecting to bucket 'cm-a42f040c55e7519eb63bbaf269fa78d3', attempt 1/5
  • 13:05:59 - Saved file 'Fam122261.clusterName' to bucket 'cm-a42f040c55e7519eb63bbaf269fa78d3'
  • 13:06:24 - Initializing a 'Galaxy' cluster.
  • 13:06:24 - Retrieved file 'snaps.yaml' from bucket 'cloudman' to 'cm_snaps.yaml'.
  • 13:06:41 - Adding 3 instance(s)...
  • 13:07:02 - Saved file 'persistent_data.yaml' to bucket 'cm-a42f040c55e7519eb63bbaf269fa78d3'
  • 13:07:38 - Saved file 'persistent_data.yaml' to bucket 'cm-a42f040c55e7519eb63bbaf269fa78d3'
  • 13:07:38 - Saved file 'universe_wsgi.ini.cloud' to bucket 'cm-a42f040c55e7519eb63bbaf269fa78d3'
  • 13:07:38 - Saved file 'tool_conf.xml.cloud' to bucket 'cm-a42f040c55e7519eb63bbaf269fa78d3'
  • 13:07:48 - Error mounting file system '/mnt/galaxyData' from '/dev/sdg3', running command '/bin/mount /dev/sdg3 /mnt/galaxyData' returned code '32' and following stderr: 'mount: you must specify the filesystem type '
  • 13:07:52 - Saved file 'persistent_data.yaml' to bucket 'cm-a42f040c55e7519eb63bbaf269fa78d3'
  • 13:07:52 - Starting service 'Postgres'
  • 13:07:52 - PostgreSQL data directory '/mnt/galaxyData/pgsql/data' does not exist (yet?)
  • 13:07:52 - Configuring PostgreSQL with a database for Galaxy...
  • 13:08:05 - Saved file 'persistent_data.yaml' to bucket 'cm-a42f040c55e7519eb63bbaf269fa78d3'
  • 13:08:05 - Starting service 'Galaxy'
  • 13:08:05 - Galaxy daemon not running.
  • 13:08:05 - Galaxy service state changed from 'Starting' to 'Error'
  • 13:08:05 - Setting up Galaxy application
  • 13:08:05 - Retrieved file 'universe_wsgi.ini.cloud' from bucket 'cm-a42f040c55e7519eb63bbaf269fa78d3' to '/mnt/galaxyTools/galaxy-central/universe_wsgi.ini'.
  • 13:08:05 - Retrieved file 'tool_conf.xml.cloud' from bucket 'cm-a42f040c55e7519eb63bbaf269fa78d3' to '/mnt/galaxyTools/galaxy-central/tool_conf.xml'.
  • 13:08:05 - Retrieved file 'tool_data_table_conf.xml.cloud' from bucket 'cloudman' to '/mnt/galaxyTools/galaxy-central/tool_data_table_conf.xml.cloud'.
  • 13:08:05 - Starting Galaxy...
  • 13:08:09 - Galaxy service state changed from 'Error' to 'Starting'
  • 13:08:09 - Saved file 'persistent_data.yaml' to bucket 'cm-a42f040c55e7519eb63bbaf269fa78d3'
  • 13:08:09 - Saved file 'tool_data_table_conf.xml.cloud' to bucket 'cm-a42f040c55e7519eb63bbaf269fa78d3'
  • 13:08:28 - Instance 'i-e46f0a8b' reported alive
  • 13:08:28 - Successfully generated root user's public key.
  • 13:08:28 - Sent master public key to worker instance 'i-e46f0a8b'.
  • 13:08:28 - Instance 'i-e26f0a8d' reported alive
  • 13:08:28 - Sent master public key to worker instance 'i-e26f0a8d'.
  • 13:08:33 - Instance 'i-e06f0a8f' reported alive
  • 13:08:33 - Sent master public key to worker instance 'i-e06f0a8f'.
  • 13:08:33 - Adding instance i-e46f0a8b to SGE Execution Host list
  • 13:08:44 - Successfully added instance 'i-e46f0a8b' to SGE
  • 13:08:44 - Waiting on worker instance 'i-e46f0a8b' to configure itself...
  • 13:08:44 - Instance 'i-e26f0a8d' already in SGE's @allhosts
  • 13:08:44 - Waiting on worker instance 'i-e26f0a8d' to configure itself...
  • 13:08:45 - Instance 'i-e06f0a8f' already in SGE's @allhosts
  • 13:08:45 - Waiting on worker instance 'i-e06f0a8f' to configure itself...
  • 13:08:50 - Instance 'i-e46f0a8b' ready
  • 13:09:27 - Galaxy service state changed from 'Starting' to 'Running'
  • 22:38:18 - Found '3' idle instances; trying to remove '2'
  • 22:38:18 - Specific termination of instance 'i-e26f0a8d' requested.
  • 22:38:18 - Removing instance 'i-e26f0a8d' from SGE
  • 22:38:18 - Successfully updated @allhosts to remove 'i-e26f0a8d'
  • 22:38:19 - Terminating instance 'i-e26f0a8d'
  • 22:38:19 - Initiated requested termination of instance. Terminating 'i-e26f0a8d'.
  • 22:38:19 - Specific termination of instance 'i-e46f0a8b' requested.
  • 22:38:19 - Removing instance 'i-e46f0a8b' from SGE
  • 22:38:19 - Successfully initiated termination of instance 'i-e26f0a8d'
  • 22:38:19 - Successfully updated @allhosts to remove 'i-e46f0a8b'
  • 22:38:20 - Terminating instance 'i-e46f0a8b'
  • 22:38:20 - Initiated requested termination of instance. Terminating 'i-e46f0a8b'.
  • 22:38:20 - Initiated requested termination of instances. Terminating '2' instances.
  • 22:38:20 - Successfully initiated termination of instance 'i-e46f0a8b'
  • 22:38:41 - Found '1' idle instances; trying to remove '1'
  • 22:38:41 - Specific termination of instance 'i-e06f0a8f' requested.
  • 22:38:41 - Removing instance 'i-e06f0a8f' from SGE
  • 22:38:41 - Successfully updated @allhosts to remove 'i-e06f0a8f'
  • 22:38:42 - Initiated requested termination of instance. Terminating 'i-e06f0a8f'.
  • 22:38:42 - Initiated requested termination of instances. Terminating '1' instances.
  • 22:38:42 - Terminating instance 'i-e06f0a8f'
  • 22:38:42 - Successfully initiated termination of instance 'i-e06f0a8f'
  • 22:38:47 - Instance 'i-e26f0a8d' successfully terminated.
  • 22:38:49 - Instance 'i-e46f0a8b' successfully terminated.
  • 22:38:59 - Adding 3 instance(s)...
  • 22:39:07 - Instance 'i-e06f0a8f' successfully terminated.
  • 22:41:02 - Instance 'i-fa096e95' reported alive
  • 22:41:02 - Sent master public key to worker instance 'i-fa096e95'.
  • 22:41:06 - Adding instance i-fa096e95 to SGE Execution Host list
  • 22:41:17 - Successfully added instance 'i-fa096e95' to SGE
  • 22:41:17 - Waiting on worker instance 'i-fa096e95' to configure itself...
  • 22:41:17 - Instance 'i-fe096e91' reported alive
  • 22:41:17 - Sent master public key to worker instance 'i-fe096e91'.
  • 22:41:22 - Adding instance i-fe096e91 to SGE Execution Host list
  • 22:41:34 - Successfully added instance 'i-fe096e91' to SGE
  • 22:41:34 - Waiting on worker instance 'i-fe096e91' to configure itself...
  • 22:41:34 - Instance 'i-fa096e95' ready
  • 22:41:52 - Instance 'i-fe096e91' ready
  • 22:42:28 - Instance 'i-fc096e93' reported alive
  • 22:42:28 - Sent master public key to worker instance 'i-fc096e93'.
  • 22:42:38 - Adding instance i-fc096e93 to SGE Execution Host list
  • 22:42:49 - Successfully added instance 'i-fc096e93' to SGE
  • 22:42:49 - Waiting on worker instance 'i-fc096e93' to configure itself...
  • 22:43:13 - Instance 'i-fc096e93' ready
  • 18:29:10 - Failure checking disk usage. [Errno 12] Cannot allocate memory
  • 18:29:10 - Failure checking disk usage. [Errno 12] Cannot allocate memory
  • 18:29:10 - Failure checking disk usage. [Errno 12] Cannot allocate memory


Reply | Threaded
Open this post in threaded view
|

Re: Help!!!!!! with Galaxy Cloud!!!!!

Enis Afgan-2
Hi Vasu, 
I am not sure I understand your question but the general instructions on how to get started and use Galaxy on the cloud (i.e., Cloudman) are available at usegalaxy.org/cloud

Let us know if that page does not answer your questions,
Enis

On Wed, Apr 13, 2011 at 9:40 AM, vasu punj <[hidden email]> wrote:
I was wondering if there are instructions on how to run Galaxy from the Cloud Console. First, though, I want to know how Galaxy is set up on the console. Can someone direct me to the instructions, please?
 
Thanks.
  
Reply | Threaded
Open this post in threaded view
|

Re: Help!!!!!! with Galaxy Cloud!!!!!

Mike Dufault

Hi Enis,
 
I started to use the terminal to check whether the job was running, but it finished successfully at about the same time. Thanks again for helping me complete the run.
 
Now I have an additional issue. I wanted to save my BAM file, but I kept getting an error; I think the error occurred because the file was too large to transfer (4.1 GB). So I saved what I could to my local HD and terminated the cluster. My EBS volume is 200 GB and persisted after the cluster was terminated.
 
I assume that my BAM file resides somewhere on the EBS volume. I started a new Unix cluster and attached the EBS volume to it. I have also established an ssh connection to the Unix cluster, but I do not know where to find the BAM file. Do you know how I can access the BAM file so that I can transfer it to my local HD?
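For reference, Galaxy typically stores history datasets as numbered .dat files rather than under their display names, so once the volume is mounted a search by size is often more reliable than a search by extension. The sketch below demonstrates this on a scratch directory standing in for the mount point; the paths and layout are assumptions, not the actual volume:

```shell
# Demonstration on a scratch directory standing in for the mounted EBS
# volume. The files/000 layout mirrors how Galaxy commonly stores
# datasets (numbered .dat files), but every path here is an assumption.
mount_point=$(mktemp -d)
mkdir -p "$mount_point/files/000"
printf 'log\n' > "$mount_point/files/000/dataset_1.dat"   # small stand-in file
truncate -s 5G "$mount_point/files/000/dataset_2.dat"     # sparse stand-in for the large BAM
# Filenames alone won't say which .dat is the BAM, so search by size
# (anything over ~3 GiB would catch a 4.1 GB file):
find "$mount_point" -type f -size +3G
```

On the real volume, the same `find` run against the actual mount point should surface the 4.1 GB dataset, which can then be copied down with scp.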
 
Thanks,
Mike
--- On Wed, 4/13/11, Enis Afgan <[hidden email]> wrote:

From: Enis Afgan <[hidden email]>
Subject: Re: [galaxy-user] Help!!!!!! with Galaxy Cloud!!!!!
To: "vasu punj" <[hidden email]>
Cc: [hidden email]
Date: Wednesday, April 13, 2011, 10:01 AM

Hi Vasu, 
I am not sure I understand your question but the general instructions on how to get started and use Galaxy on the cloud (i.e., Cloudman) are available at usegalaxy.org/cloud

Let us know if you that page does not answer your questions,
Enis

On Wed, Apr 13, 2011 at 9:40 AM, vasu punj <punjv@...> wrote:
I was wondering if there are instructions how can I run the Galaxy on CloudConsole, Indeed  first I want to know how Galaxy is established on console? Can someone direct me to the instructions please.
 
Thanks.
  
--- On Tue, 4/12/11, Enis Afgan <eafgan@...> wrote:

From: Enis Afgan <eafgan@...>
Subject: Re: [galaxy-user] Help!!!!!! with Galaxy Cloud!!!!!
To: "Mike Dufault" <dufaultm@...>
Cc: galaxy-user@...
Date: Tuesday, April 12, 2011, 9:31 PM


Galaxy has the functionality to recover any jobs that were running after it is restarted, so it is quite possible for the job to still be running. In addition, from the cloudman console, it appears that at least one instance is pretty heavily loaded, which can also mean that the job is still running. However, without actually accessing the instance through the command line and checking the status of the job queue, it is not possible to tell whether the job is actually running. Do you know how to do that? It's just a few commands in the terminal:
- access the instance
[local]$ ssh -i <path to the private key you downloaded from AWS when you created a key pair> ubuntu@<instance public DNS>
- become galaxy user
[ec2]$ sudo su galaxy 
- list any running jobs
[ec2]$ qstat

If that command returns a list of jobs and the jobs are in state 'r' (running), the job is still running; otherwise, no.
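The check above boils down to asking whether any SGE job is in state 'r'. As a hedged sketch (assuming SGE's default qstat layout of two header lines with the state in the fifth column; `any_running` is a made-up helper name, not an SGE command), it can be scripted:

```shell
# Hypothetical helper, not from this thread: read qstat-style output on
# stdin and exit 0 only if some job's state column shows 'r' (running).
# Assumes SGE's default layout: two header lines, state in the 5th field.
any_running() {
  awk 'NR > 2 && $5 == "r" { found = 1 } END { exit !found }'
}

# On the instance, pipe the real queue through it, e.g.:
#   qstat | any_running && echo "job still running" || echo "queue idle"
```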

Let me know how it goes,
Enis

On Tue, Apr 12, 2011 at 9:49 PM, Mike Dufault <dufaultm@...> wrote:
Hi Enis,

THANK YOU!!!

I see that my "filter pileup on data" step is running. Is this the same analysis that was running before, or did it relaunch when you restarted Galaxy? I just don't know if the analysis would be compromised.

Thanks again to you and the whole Galaxy team.

Best,

Mike

--- On Tue, 4/12/11, Enis Afgan <eafgan@...> wrote:

From: Enis Afgan <eafgan@...>
Subject: Re: [galaxy-user] Help!!!!!! with Galaxy Cloud!!!!!
To: "Mike Dufault" <dufaultm@...>
Cc: "Anton Nekrutenko" <anton@...>, galaxy-user@...
Date: Tuesday, April 12, 2011, 9:16 PM


Ahh, for some reason cloudman thinks Galaxy is not 'running' but still 'starting' and has thus not enabled the button. To access the analysis, in your browser, just delete the '/cloud' part of the URL and that should load Galaxy.

Sorry about the confusion,
Enis

On Tue, Apr 12, 2011 at 9:12 PM, Mike Dufault <dufaultm@...> wrote:
Hi Enis,

Thanks for looking into this.

From the Galaxy Cloudman Console, I can see that it was restarted from the log (thanks), but the "Access Galaxy" choice is still grayed out and I don't know how to access the Analysis window.

Is there a way back into my analysis?

Thanks,
Mike



--- On Tue, 4/12/11, Enis Afgan <eafgan@...> wrote:

From: Enis Afgan <eafgan@...>
Subject: Re: [galaxy-user] Help!!!!!! with Galaxy Cloud!!!!!
To: "Mike Dufault" <dufaultm@...>
Cc: "Anton Nekrutenko" <anton@...>, galaxy-user@...
Date: Tuesday, April 12, 2011, 8:55 PM


Hi Mike, 
Try accessing your Galaxy instance now. It should be ok.

The link in your email contained the IP for your instance so I took the liberty of restarting Galaxy and that brought it back up. There seems to have been an issue with Galaxy accessing its database and that resulted in Galaxy crashing. We'll look into why that happened in the first place but should be ok now. 

Let me know if you have any more trouble,
Enis

On Tue, Apr 12, 2011 at 2:49 PM, Mike Dufault <dufaultm@...> wrote:
Hello Galaxy Staff,

My data has been running on the Amazon EC2 for just over 24hrs. I have not closed any windows and my Exome analysis made it all the way through to filter on Pile up. I have two tabs for this instance. One is the Galaxy Cloudman Console and the other is the tab where I perform the analysis, load data, history etc.

Anyway, I went to add a step to the workflow and the "Welcome Galaxy to the Cloud" screen appeared, along with the message "There is no Galaxy instance running on this host, or the Galaxy instance is not responding. To manage Galaxy on this host, please use the Cloud Console."

What happened???

When I go back to the Galaxy Cloudman Console, it shows that my instance is still running along with the four cores, the Cluster log is below. AWS also shows that my instance is running.

Will the workflow finish? Can I get my data? How?

I tried to re-access the analysis page by selecting "Access Galaxy" from the "Galaxy Cloudman Console" but it sends me to the same "Welcome page."

Is there a way to get back into the analysis page?

Please help!!!

Thanks,
Mike

The cluster log shows:
  • 13:05:24 - Master starting
  • 13:05:25 - Completed initial cluster configuration.
  • 13:05:33 - Starting service 'SGE'
  • 13:05:48 - Configuring SGE...
  • 13:05:56 - Successfully setup SGE; configuring SGE
  • 13:05:57 - Saved file 'persistent_data.yaml' to bucket 'cm-a42f040c55e7519eb63bbaf269fa78d3'
  • 13:05:57 - Saved file 'cm_boot.py' to bucket 'cm-a42f040c55e7519eb63bbaf269fa78d3'
  • 13:05:57 - Saved file 'cm.tar.gz' to bucket 'cm-a42f040c55e7519eb63bbaf269fa78d3'
  • 13:05:57 - Problem connecting to bucket 'cm-a42f040c55e7519eb63bbaf269fa78d3', attempt 1/5
  • 13:05:59 - Saved file 'Fam122261.clusterName' to bucket 'cm-a42f040c55e7519eb63bbaf269fa78d3'
  • 13:06:24 - Initializing a 'Galaxy' cluster.
  • 13:06:24 - Retrieved file 'snaps.yaml' from bucket 'cloudman' to 'cm_snaps.yaml'.
  • 13:06:41 - Adding 3 instance(s)...
  • 13:07:02 - Saved file 'persistent_data.yaml' to bucket 'cm-a42f040c55e7519eb63bbaf269fa78d3'
  • 13:07:38 - Saved file 'persistent_data.yaml' to bucket 'cm-a42f040c55e7519eb63bbaf269fa78d3'
  • 13:07:38 - Saved file 'universe_wsgi.ini.cloud' to bucket 'cm-a42f040c55e7519eb63bbaf269fa78d3'
  • 13:07:38 - Saved file 'tool_conf.xml.cloud' to bucket 'cm-a42f040c55e7519eb63bbaf269fa78d3'
  • 13:07:48 - Error mounting file system '/mnt/galaxyData' from '/dev/sdg3', running command '/bin/mount /dev/sdg3 /mnt/galaxyData' returned code '32' and following stderr: 'mount: you must specify the filesystem type '
  • 13:07:52 - Saved file 'persistent_data.yaml' to bucket 'cm-a42f040c55e7519eb63bbaf269fa78d3'
  • 13:07:52 - Starting service 'Postgres'
  • 13:07:52 - PostgreSQL data directory '/mnt/galaxyData/pgsql/data' does not exist (yet?)
  • 13:07:52 - Configuring PostgreSQL with a database for Galaxy...
  • 13:08:05 - Saved file 'persistent_data.yaml' to bucket 'cm-a42f040c55e7519eb63bbaf269fa78d3'
  • 13:08:05 - Starting service 'Galaxy'
  • 13:08:05 - Galaxy daemon not running.
  • 13:08:05 - Galaxy service state changed from 'Starting' to 'Error'
  • 13:08:05 - Setting up Galaxy application
  • 13:08:05 - Retrieved file 'universe_wsgi.ini.cloud' from bucket 'cm-a42f040c55e7519eb63bbaf269fa78d3' to '/mnt/galaxyTools/galaxy-central/universe_wsgi.ini'.
  • 13:08:05 - Retrieved file 'tool_conf.xml.cloud' from bucket 'cm-a42f040c55e7519eb63bbaf269fa78d3' to '/mnt/galaxyTools/galaxy-central/tool_conf.xml'.
  • 13:08:05 - Retrieved file 'tool_data_table_conf.xml.cloud' from bucket 'cloudman' to '/mnt/galaxyTools/galaxy-central/tool_data_table_conf.xml.cloud'.
  • 13:08:05 - Starting Galaxy...
  • 13:08:09 - Galaxy service state changed from 'Error' to 'Starting'
  • 13:08:09 - Saved file 'persistent_data.yaml' to bucket 'cm-a42f040c55e7519eb63bbaf269fa78d3'
  • 13:08:09 - Saved file 'tool_data_table_conf.xml.cloud' to bucket 'cm-a42f040c55e7519eb63bbaf269fa78d3'
  • 13:08:28 - Instance 'i-e46f0a8b' reported alive
  • 13:08:28 - Successfully generated root user's public key.
  • 13:08:28 - Sent master public key to worker instance 'i-e46f0a8b'.
  • 13:08:28 - Instance 'i-e26f0a8d' reported alive
  • 13:08:28 - Sent master public key to worker instance 'i-e26f0a8d'.
  • 13:08:33 - Instance 'i-e06f0a8f' reported alive
  • 13:08:33 - Sent master public key to worker instance 'i-e06f0a8f'.
  • 13:08:33 - Adding instance i-e46f0a8b to SGE Execution Host list
  • 13:08:44 - Successfully added instance 'i-e46f0a8b' to SGE
  • 13:08:44 - Waiting on worker instance 'i-e46f0a8b' to configure itself...
  • 13:08:44 - Instance 'i-e26f0a8d' already in SGE's @allhosts
  • 13:08:44 - Waiting on worker instance 'i-e26f0a8d' to configure itself...
  • 13:08:45 - Instance 'i-e06f0a8f' already in SGE's @allhosts
  • 13:08:45 - Waiting on worker instance 'i-e06f0a8f' to configure itself...
  • 13:08:50 - Instance 'i-e46f0a8b' ready
  • 13:09:27 - Galaxy service state changed from 'Starting' to 'Running'
  • 22:38:18 - Found '3' idle instances; trying to remove '2'
  • 22:38:18 - Specific termination of instance 'i-e26f0a8d' requested.
  • 22:38:18 - Removing instance 'i-e26f0a8d' from SGE
  • 22:38:18 - Successfully updated @allhosts to remove 'i-e26f0a8d'
  • 22:38:19 - Terminating instance 'i-e26f0a8d'
  • 22:38:19 - Initiated requested termination of instance. Terminating 'i-e26f0a8d'.
  • 22:38:19 - Specific termination of instance 'i-e46f0a8b' requested.
  • 22:38:19 - Removing instance 'i-e46f0a8b' from SGE
  • 22:38:19 - Successfully initiated termination of instance 'i-e26f0a8d'
  • 22:38:19 - Successfully updated @allhosts to remove 'i-e46f0a8b'
  • 22:38:20 - Terminating instance 'i-e46f0a8b'
  • 22:38:20 - Initiated requested termination of instance. Terminating 'i-e46f0a8b'.
  • 22:38:20 - Initiated requested termination of instances. Terminating '2' instances.
  • 22:38:20 - Successfully initiated termination of instance 'i-e46f0a8b'
  • 22:38:41 - Found '1' idle instances; trying to remove '1'
  • 22:38:41 - Specific termination of instance 'i-e06f0a8f' requested.
  • 22:38:41 - Removing instance 'i-e06f0a8f' from SGE
  • 22:38:41 - Successfully updated @allhosts to remove 'i-e06f0a8f'
  • 22:38:42 - Initiated requested termination of instance. Terminating 'i-e06f0a8f'.
  • 22:38:42 - Initiated requested termination of instances. Terminating '1' instances.
  • 22:38:42 - Terminating instance 'i-e06f0a8f'
  • 22:38:42 - Successfully initiated termination of instance 'i-e06f0a8f'
  • 22:38:47 - Instance 'i-e26f0a8d' successfully terminated.
  • 22:38:49 - Instance 'i-e46f0a8b' successfully terminated.
  • 22:38:59 - Adding 3 instance(s)...
  • 22:39:07 - Instance 'i-e06f0a8f' successfully terminated.
  • 22:41:02 - Instance 'i-fa096e95' reported alive
  • 22:41:02 - Sent master public key to worker instance 'i-fa096e95'.
  • 22:41:06 - Adding instance i-fa096e95 to SGE Execution Host list
  • 22:41:17 - Successfully added instance 'i-fa096e95' to SGE
  • 22:41:17 - Waiting on worker instance 'i-fa096e95' to configure itself...
  • 22:41:17 - Instance 'i-fe096e91' reported alive
  • 22:41:17 - Sent master public key to worker instance 'i-fe096e91'.
  • 22:41:22 - Adding instance i-fe096e91 to SGE Execution Host list
  • 22:41:34 - Successfully added instance 'i-fe096e91' to SGE
  • 22:41:34 - Waiting on worker instance 'i-fe096e91' to configure itself...
  • 22:41:34 - Instance 'i-fa096e95' ready
  • 22:41:52 - Instance 'i-fe096e91' ready
  • 22:42:28 - Instance 'i-fc096e93' reported alive
  • 22:42:28 - Sent master public key to worker instance 'i-fc096e93'.
  • 22:42:38 - Adding instance i-fc096e93 to SGE Execution Host list
  • 22:42:49 - Successfully added instance 'i-fc096e93' to SGE
  • 22:42:49 - Waiting on worker instance 'i-fc096e93' to configure itself...
  • 22:43:13 - Instance 'i-fc096e93' ready
  • 18:29:10 - Failure checking disk usage. [Errno 12] Cannot allocate memory
  • 18:29:10 - Failure checking disk usage. [Errno 12] Cannot allocate memory
  • 18:29:10 - Failure checking disk usage. [Errno 12] Cannot allocate memory



Re: Help!!!!!! with Galaxy Cloud!!!!!

Enis Afgan-2
Hi Mike, 
Once the given EBS volume is attached and mounted, all of the data should be in /mnt/galaxyData/files/000/
This assumes the file system is mounted at /mnt/galaxyData, which is where cloudman mounts it automatically on cluster instantiation.

Enis


Re: Help!!!!!! with Galaxy Cloud!!!!!

Mike Dufault
Hello again,

So I am able to see all of the .dat files in /mnt/galaxyData. What commands can I use to download a file to my HD? Also, what program should I use to open the .dat file?

Thanks again,
Mike
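Galaxy stores every dataset under a generic name (a `.dat` file below `/mnt/galaxyData/files/000/`), so one way to spot a 4.1 GB BAM among them is by size. A hedged sketch (`largest_dats` is an illustrative helper, not a Galaxy or cloudman command):

```shell
# Illustrative helper, not part of Galaxy: list the .dat files under a
# directory, biggest first, so a multi-gigabyte BAM stands out.
largest_dats() {
  find "$1" -type f -name '*.dat' -exec wc -c {} + 2>/dev/null \
    | grep -v ' total$' \
    | sort -rn \
    | head -5
}

# On the cluster, point it at the mounted volume, then copy the winner
# home with scp (key and instance DNS as in the earlier ssh command):
#   largest_dats /mnt/galaxyData/files
#   scp -i <key> ubuntu@<instance DNS>:<path to the .dat file> local.bam
```

The `.dat` extension is just Galaxy's storage convention; a BAM dataset renamed to `.bam` can usually be opened with samtools or IGV.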

--- On Wed, 4/13/11, Enis Afgan <[hidden email]> wrote:

From: Enis Afgan <[hidden email]>
Subject: Re: [galaxy-user] Help!!!!!! with Galaxy Cloud!!!!!
To: "Mike Dufault" <[hidden email]>
Cc: [hidden email]
Date: Wednesday, April 13, 2011, 11:15 PM

Hi Mike, 
Once the given EBS volume is attached and mounted, all of the data should be in /mnt/galaxyData/files/000/
This assumes the file system is mounted to /mnt/galaxyData, which is where it would get mounted to automatically by cloudman on cluster instantiation.

Enis

On Wed, Apr 13, 2011 at 9:00 PM, Mike Dufault <dufaultm@...> wrote:

Hi Enis,
 
I started to use the terminal to check to see if the job was running, but it stopped successfully at the same time. Thanks again for helping me to complete the run.
 
Now I have an additional issue. I wanted to save my BAM file, but I kept getting an error. I think the error was because it was too large to send (4.1GB). So I saved what I could to my local HD and terminated the cluster. My EBS volume is 200GB and persisted after the cluster was terminated.
 
I assume that my BAM file resides somewhere in the EBS volume. I started a new Unix cluster and "attached" the EBS to that cluster. I also established an ssh to the Unis cluster but I do not know where to find the BAM file. Do you know how I can access the BAM file so that I can transfer it to my local HD?
 
Thanks,
Mike

--- On Wed, 4/13/11, Enis Afgan <eafgan@...> wrote:

From: Enis Afgan <eafgan@...>
Subject: Re: [galaxy-user] Help!!!!!! with Galaxy Cloud!!!!!
To: "vasu punj" <punjv@...> Date: Wednesday, April 13, 2011, 10:01 AM


Hi Vasu, 
I am not sure I understand your question but the general instructions on how to get started and use Galaxy on the cloud (i.e., Cloudman) are available at usegalaxy.org/cloud

Let us know if you that page does not answer your questions,
Enis

On Wed, Apr 13, 2011 at 9:40 AM, vasu punj <punjv@...> wrote:
I was wondering if there are instructions how can I run the Galaxy on CloudConsole, Indeed  first I want to know how Galaxy is established on console? Can someone direct me to the instructions please.
 
Thanks.
  
--- On Tue, 4/12/11, Enis Afgan <eafgan@...> wrote:

From: Enis Afgan <eafgan@...>
Subject: Re: [galaxy-user] Help!!!!!! with Galaxy Cloud!!!!!
To: "Mike Dufault" <dufaultm@...>
Cc: galaxy-user@...
Date: Tuesday, April 12, 2011, 9:31 PM


Galaxy has the functionality to recover any jobs that were running after it's restarted so it is quite possible to for the job to still be running. In addition, from the cloudman console, it appears that at least one instance is pretty heavily loaded so that can also mean that the job is still running. However, without actually accessing the instance through the command line and checking the status of the job queue, it is not possible to tell if the job is - actually running. Do you know how to do that? It's just a few commands in the terminal:
- access the instance
[local]$ ssh -i <path to the private key you downloaded from AWS when you created a key pair> ubuntu@<instance public DNS>
- become galaxy user
[ec2]$ sudo su galaxy 
- list any running jobs
[ec2]$ qstat

If that command returns a list of jobs and the jobs are in state 'r' (running), the job is still running; otherwise, no.
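As a sketch, the 'state r' check above can also be scripted. The snippet below parses the default SGE `qstat` layout (two header lines, then one row per job with the state in the fifth column); the sample text is purely illustrative, not real cluster output:

```python
def running_jobs(qstat_output):
    """Return job IDs whose SGE state column is 'r' (running).

    Assumes the default qstat layout: two header lines, then one row per
    job with the state in the fifth whitespace-separated column.
    """
    jobs = []
    for line in qstat_output.splitlines()[2:]:
        fields = line.split()
        if len(fields) >= 5 and fields[4] == "r":
            jobs.append(fields[0])
    return jobs

# Illustrative sample (not real cluster output):
sample = (
    "job-ID  prior    name        user    state  submit/start at      queue  slots\n"
    "------------------------------------------------------------------------------\n"
    "     42 0.55500  filter_pil  galaxy  r      04/12/2011 21:30:11  all.q  4\n"
    "     43 0.00000  upload1     galaxy  qw     04/12/2011 21:31:02         1\n"
)
print(running_jobs(sample))  # a non-empty list means something is still running
```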

Let me know how it goes,
Enis

On Tue, Apr 12, 2011 at 9:49 PM, Mike Dufault <dufaultm@...> wrote:
Hi Enis,

THANK YOU!!!

I see that my "filter pileup on data" step is running. Is this the same analysis that was running before, or did it relaunch when you restarted Galaxy? I just don't know if the analysis would be compromised.

Thanks again to you and the whole Galaxy team.

Best,

Mike

--- On Tue, 4/12/11, Enis Afgan <eafgan@...> wrote:

From: Enis Afgan <eafgan@...>
Subject: Re: [galaxy-user] Help!!!!!! with Galaxy Cloud!!!!!
To: "Mike Dufault" <dufaultm@...>
Cc: "Anton Nekrutenko" <anton@...>, galaxy-user@...
Date: Tuesday, April 12, 2011, 9:16 PM


Ahh, for some reason cloudman thinks Galaxy is not 'running' but still 'starting' and has thus not enabled that button. To access the analysis, just delete the '/cloud' part of the URL in your browser and that should load Galaxy.

Sorry about the confusion,
Enis

On Tue, Apr 12, 2011 at 9:12 PM, Mike Dufault <dufaultm@...> wrote:
Hi Enis,

Thanks for looking into this.

From the Galaxy Cloudman Console, I can see that it was restarted from the log (thanks), but the "Access Galaxy" choice is still grayed out and I don't know how to access the Analysis window.

Is there a way back into my analysis?

Thanks,
Mike



--- On Tue, 4/12/11, Enis Afgan <eafgan@...> wrote:

From: Enis Afgan <eafgan@...>
Subject: Re: [galaxy-user] Help!!!!!! with Galaxy Cloud!!!!!
To: "Mike Dufault" <dufaultm@...>
Cc: "Anton Nekrutenko" <anton@...>, galaxy-user@...
Date: Tuesday, April 12, 2011, 8:55 PM


Hi Mike, 
Try accessing your Galaxy instance now. It should be ok.

The link in your email contained the IP for your instance, so I took the liberty of restarting Galaxy, and that brought it back up. There seems to have been an issue with Galaxy accessing its database, which resulted in Galaxy crashing. We'll look into why that happened in the first place, but it should be ok now.

Let me know if you have any more trouble,
Enis

On Tue, Apr 12, 2011 at 2:49 PM, Mike Dufault <dufaultm@...> wrote:
Hello Galaxy Staff,

My data has been running on the Amazon EC2 for just over 24hrs. I have not closed any windows and my Exome analysis made it all the way through to filter on Pile up. I have two tabs for this instance. One is the Galaxy Cloudman Console and the other is the tab where I perform the analysis, load data, history etc.

Anyway, I went to add a step to the workflow and got the "Welcome Galaxy to the Cloud" screen along with the message "There is no Galaxy instance running on this host, or the Galaxy instance is not responding. To manage Galaxy on this host, please use the Cloud Console."

What happened???

When I go back to the Galaxy Cloudman Console, it shows that my instance is still running along with the four cores, the Cluster log is below. AWS also shows that my instance is running.

Will the work flow finish? Can I get my data? How?

I tried to re-access the analysis page by selecting "Access Galaxy" from the "Galaxy Cloudman Console" but it sends me to the same "Welcome page."

Is there a way to get back into the analysis page?

Please help!!!

Thanks,
Mike

The cluster log shows:
  • 13:05:24 - Master starting
  • 13:05:25 - Completed initial cluster configuration.
  • 13:05:33 - Starting service 'SGE'
  • 13:05:48 - Configuring SGE...
  • 13:05:56 - Successfully setup SGE; configuring SGE
  • 13:05:57 - Saved file 'persistent_data.yaml' to bucket 'cm-a42f040c55e7519eb63bbaf269fa78d3'
  • 13:05:57 - Saved file 'cm_boot.py' to bucket 'cm-a42f040c55e7519eb63bbaf269fa78d3'
  • 13:05:57 - Saved file 'cm.tar.gz' to bucket 'cm-a42f040c55e7519eb63bbaf269fa78d3'
  • 13:05:57 - Problem connecting to bucket 'cm-a42f040c55e7519eb63bbaf269fa78d3', attempt 1/5
  • 13:05:59 - Saved file 'Fam122261.clusterName' to bucket 'cm-a42f040c55e7519eb63bbaf269fa78d3'
  • 13:06:24 - Initializing a 'Galaxy' cluster.
  • 13:06:24 - Retrieved file 'snaps.yaml' from bucket 'cloudman' to 'cm_snaps.yaml'.
  • 13:06:41 - Adding 3 instance(s)...
  • 13:07:02 - Saved file 'persistent_data.yaml' to bucket 'cm-a42f040c55e7519eb63bbaf269fa78d3'
  • 13:07:38 - Saved file 'persistent_data.yaml' to bucket 'cm-a42f040c55e7519eb63bbaf269fa78d3'
  • 13:07:38 - Saved file 'universe_wsgi.ini.cloud' to bucket 'cm-a42f040c55e7519eb63bbaf269fa78d3'
  • 13:07:38 - Saved file 'tool_conf.xml.cloud' to bucket 'cm-a42f040c55e7519eb63bbaf269fa78d3'
  • 13:07:48 - Error mounting file system '/mnt/galaxyData' from '/dev/sdg3', running command '/bin/mount /dev/sdg3 /mnt/galaxyData' returned code '32' and following stderr: 'mount: you must specify the filesystem type '
  • 13:07:52 - Saved file 'persistent_data.yaml' to bucket 'cm-a42f040c55e7519eb63bbaf269fa78d3'
  • 13:07:52 - Starting service 'Postgres'
  • 13:07:52 - PostgreSQL data directory '/mnt/galaxyData/pgsql/data' does not exist (yet?)
  • 13:07:52 - Configuring PostgreSQL with a database for Galaxy...
  • 13:08:05 - Saved file 'persistent_data.yaml' to bucket 'cm-a42f040c55e7519eb63bbaf269fa78d3'
  • 13:08:05 - Starting service 'Galaxy'
  • 13:08:05 - Galaxy daemon not running.
  • 13:08:05 - Galaxy service state changed from 'Starting' to 'Error'
  • 13:08:05 - Setting up Galaxy application
  • 13:08:05 - Retrieved file 'universe_wsgi.ini.cloud' from bucket 'cm-a42f040c55e7519eb63bbaf269fa78d3' to '/mnt/galaxyTools/galaxy-central/universe_wsgi.ini'.
  • 13:08:05 - Retrieved file 'tool_conf.xml.cloud' from bucket 'cm-a42f040c55e7519eb63bbaf269fa78d3' to '/mnt/galaxyTools/galaxy-central/tool_conf.xml'.
  • 13:08:05 - Retrieved file 'tool_data_table_conf.xml.cloud' from bucket 'cloudman' to '/mnt/galaxyTools/galaxy-central/tool_data_table_conf.xml.cloud'.
  • 13:08:05 - Starting Galaxy...
  • 13:08:09 - Galaxy service state changed from 'Error' to 'Starting'
  • 13:08:09 - Saved file 'persistent_data.yaml' to bucket 'cm-a42f040c55e7519eb63bbaf269fa78d3'
  • 13:08:09 - Saved file 'tool_data_table_conf.xml.cloud' to bucket 'cm-a42f040c55e7519eb63bbaf269fa78d3'
  • 13:08:28 - Instance 'i-e46f0a8b' reported alive
  • 13:08:28 - Successfully generated root user's public key.
  • 13:08:28 - Sent master public key to worker instance 'i-e46f0a8b'.
  • 13:08:28 - Instance 'i-e26f0a8d' reported alive
  • 13:08:28 - Sent master public key to worker instance 'i-e26f0a8d'.
  • 13:08:33 - Instance 'i-e06f0a8f' reported alive
  • 13:08:33 - Sent master public key to worker instance 'i-e06f0a8f'.
  • 13:08:33 - Adding instance i-e46f0a8b to SGE Execution Host list
  • 13:08:44 - Successfully added instance 'i-e46f0a8b' to SGE
  • 13:08:44 - Waiting on worker instance 'i-e46f0a8b' to configure itself...
  • 13:08:44 - Instance 'i-e26f0a8d' already in SGE's @allhosts
  • 13:08:44 - Waiting on worker instance 'i-e26f0a8d' to configure itself...
  • 13:08:45 - Instance 'i-e06f0a8f' already in SGE's @allhosts
  • 13:08:45 - Waiting on worker instance 'i-e06f0a8f' to configure itself...
  • 13:08:50 - Instance 'i-e46f0a8b' ready
  • 13:09:27 - Galaxy service state changed from 'Starting' to 'Running'
  • 22:38:18 - Found '3' idle instances; trying to remove '2'
  • 22:38:18 - Specific termination of instance 'i-e26f0a8d' requested.
  • 22:38:18 - Removing instance 'i-e26f0a8d' from SGE
  • 22:38:18 - Successfully updated @allhosts to remove 'i-e26f0a8d'
  • 22:38:19 - Terminating instance 'i-e26f0a8d'
  • 22:38:19 - Initiated requested termination of instance. Terminating 'i-e26f0a8d'.
  • 22:38:19 - Specific termination of instance 'i-e46f0a8b' requested.
  • 22:38:19 - Removing instance 'i-e46f0a8b' from SGE
  • 22:38:19 - Successfully initiated termination of instance 'i-e26f0a8d'
  • 22:38:19 - Successfully updated @allhosts to remove 'i-e46f0a8b'
  • 22:38:20 - Terminating instance 'i-e46f0a8b'
  • 22:38:20 - Initiated requested termination of instance. Terminating 'i-e46f0a8b'.
  • 22:38:20 - Initiated requested termination of instances. Terminating '2' instances.
  • 22:38:20 - Successfully initiated termination of instance 'i-e46f0a8b'
  • 22:38:41 - Found '1' idle instances; trying to remove '1'
  • 22:38:41 - Specific termination of instance 'i-e06f0a8f' requested.
  • 22:38:41 - Removing instance 'i-e06f0a8f' from SGE
  • 22:38:41 - Successfully updated @allhosts to remove 'i-e06f0a8f'
  • 22:38:42 - Initiated requested termination of instance. Terminating 'i-e06f0a8f'.
  • 22:38:42 - Initiated requested termination of instances. Terminating '1' instances.
  • 22:38:42 - Terminating instance 'i-e06f0a8f'
  • 22:38:42 - Successfully initiated termination of instance 'i-e06f0a8f'
  • 22:38:47 - Instance 'i-e26f0a8d' successfully terminated.
  • 22:38:49 - Instance 'i-e46f0a8b' successfully terminated.
  • 22:38:59 - Adding 3 instance(s)...
  • 22:39:07 - Instance 'i-e06f0a8f' successfully terminated.
  • 22:41:02 - Instance 'i-fa096e95' reported alive
  • 22:41:02 - Sent master public key to worker instance 'i-fa096e95'.
  • 22:41:06 - Adding instance i-fa096e95 to SGE Execution Host list
  • 22:41:17 - Successfully added instance 'i-fa096e95' to SGE
  • 22:41:17 - Waiting on worker instance 'i-fa096e95' to configure itself...
  • 22:41:17 - Instance 'i-fe096e91' reported alive
  • 22:41:17 - Sent master public key to worker instance 'i-fe096e91'.
  • 22:41:22 - Adding instance i-fe096e91 to SGE Execution Host list
  • 22:41:34 - Successfully added instance 'i-fe096e91' to SGE
  • 22:41:34 - Waiting on worker instance 'i-fe096e91' to configure itself...
  • 22:41:34 - Instance 'i-fa096e95' ready
  • 22:41:52 - Instance 'i-fe096e91' ready
  • 22:42:28 - Instance 'i-fc096e93' reported alive
  • 22:42:28 - Sent master public key to worker instance 'i-fc096e93'.
  • 22:42:38 - Adding instance i-fc096e93 to SGE Execution Host list
  • 22:42:49 - Successfully added instance 'i-fc096e93' to SGE
  • 22:42:49 - Waiting on worker instance 'i-fc096e93' to configure itself...
  • 22:43:13 - Instance 'i-fc096e93' ready
  • 18:29:10 - Failure checking disk usage. [Errno 12] Cannot allocate memory
  • 18:29:10 - Failure checking disk usage. [Errno 12] Cannot allocate memory
  • 18:29:10 - Failure checking disk usage. [Errno 12] Cannot allocate memory


___________________________________________________________
The Galaxy User list should be used for the discussion of
Galaxy analysis and other features on the public server
at usegalaxy.org.  Please keep all replies on the list by
using "reply all" in your mail client.  For discussion of
local Galaxy instances and the Galaxy source code, please
use the Galaxy Development list:

 http://lists.bx.psu.edu/listinfo/galaxy-dev

To manage your subscriptions to this and other Galaxy lists,
please use the interface at:

 http://lists.bx.psu.edu/




Reply | Threaded
Open this post in threaded view
|

Re: Help!!!!!! with Galaxy Cloud!!!!!

Enis Afgan-2
Hi Mike, 
You should be able to download the desired file(s) directly from Galaxy by expanding the desired history item and then clicking the 'disk' (i.e., download) icon. 

Alternatively, if you want to copy the file by hand directly from the file system on the instance, the following is the command to execute (note that the command is executed from your local machine):
scp -i <path to your AWS key pair file> ubuntu@<instance public DNS>:/mnt/galaxyData/files/000/<file name> .
(also, note the '.' at the end of the command) 
This command will copy the remote file to your local machine and it will put it in your current directory.

Once downloaded, the file can probably be opened with any text editor (unless it's a binary file, in which case it will have to be opened with the appropriate tool that can read the given file format).
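For a binary dataset such as a BAM, one quick sanity check (a sketch, not a Galaxy feature) is to read the magic bytes: Galaxy's numbered `.dat` files carry no format hint in the name, but a BAM dataset is a BGZF/gzip stream whose decompressed payload starts with the 4-byte BAM magic:

```python
import gzip

def looks_like_bam(path):
    """Heuristic: True if the file is gzip/BGZF data whose decompressed
    payload starts with the BAM magic bytes."""
    try:
        with gzip.open(path, "rb") as fh:
            return fh.read(4) == b"BAM\x01"  # BAM magic per the SAM/BAM spec
    except OSError:  # not gzip-compressed at all
        return False
```

On a downloaded dataset this distinguishes, say, a BAM from a plain tabular file without relying on the history's metadata.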

Enis

On Fri, Apr 15, 2011 at 12:32 AM, Mike Dufault <[hidden email]> wrote:
Hello again,

So I am able to see all of the .dat files in /mnt/galaxyData. What commands can I use to download a file to my HD? Also, what program should I use to open the .dat file?

Thanks again,

Mike

--- On Wed, 4/13/11, Enis Afgan <[hidden email]> wrote:

From: Enis Afgan <[hidden email]>
Subject: Re: [galaxy-user] Help!!!!!! with Galaxy Cloud!!!!!
To: "Mike Dufault" <[hidden email]>
Cc: [hidden email]
Date: Wednesday, April 13, 2011, 11:15 PM


Hi Mike, 
Once the given EBS volume is attached and mounted, all of the data should be in /mnt/galaxyData/files/000/
This assumes the file system is mounted at /mnt/galaxyData, which is where cloudman mounts it automatically on cluster instantiation.

Enis

Reply | Threaded
Open this post in threaded view
|

Re: Help!!!!!! with Galaxy Cloud!!!!!

Mike Dufault
Hi Enis,

With the exception of the BAM file (4.1Gb), I have been able to download everything that I have tried using the "disk" icon from the history panel. I think the 4.1Gb file is just too large, because I keep getting a memory error. Since I am still new to the whole AWS-EC2 set-up, I have not fooled around too much with the instance set-up. I have followed the screencast given by Anton. Perhaps I need to change the memory settings when I create the instance. It seems it would be better if I could download it from the "disk" icon, since then it would be in the correct BAM format.

Anyway, I will try the scp directions that you have provided and find out how to convert from .dat back to (or somehow extract the) BAM file once I get it to my local machine. Each step is one step closer.
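One practical note on the .dat-to-BAM step (a sketch, worth verifying on the actual data): Galaxy stores each dataset's bytes on disk unchanged, so a BAM history item's `.dat` file should already be a valid BAM; "converting" it back is just a rename so downstream tools recognize the format:

```python
import os

def restore_extension(dat_path, ext="bam"):
    """Rename a downloaded Galaxy dataset file to carry its real extension.

    The .dat file holds the dataset bytes as-is, so no conversion is
    needed; renaming lets tools such as samtools or IGV pick it up.
    """
    new_path = os.path.splitext(dat_path)[0] + "." + ext
    os.rename(dat_path, new_path)
    return new_path
```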

Thanks again,
Mike

--- On Fri, 4/15/11, Enis Afgan <[hidden email]> wrote:

From: Enis Afgan <[hidden email]>
Subject: Re: [galaxy-user] Help!!!!!! with Galaxy Cloud!!!!!
To: "Mike Dufault" <[hidden email]>
Cc: [hidden email]
Date: Friday, April 15, 2011, 8:21 AM

Hi Mike, 
You should be able to download the desired file(s) directly from Galaxy by expanding the desired history item and then clicking the 'disk' (i.e., download) icon. 

Alternatively, if you want to copy the file by hand directly from the file system on the instance, the following is the command to execute (note that the command is executed from your local machine):
scp -i <path to your AWS key pair file> ubuntu@<instance public DNS>:/mnt/galaxyData/files/000/<file name> .
(also, note the '.' at the end of the command) 
This command will copy the remote file to your local machine and it will put it in your current directory.

Once downloaded, the file can probably be opened with any text editor (unless it's a binary file, in which case it will have to be opened with the appropriate tool that can read the given file format).

Enis

On Fri, Apr 15, 2011 at 12:32 AM, Mike Dufault <dufaultm@...> wrote:
Hello again,

So I am able to see all of the .dat files in /mnt/galaxyData. What commands can I use to download a file to my HD? Also, what program should I use to open the .dat file?

Thanks again,

Mike

--- On Wed, 4/13/11, Enis Afgan <eafgan@...> wrote:

From: Enis Afgan <eafgan@...>
Subject: Re: [galaxy-user] Help!!!!!! with Galaxy Cloud!!!!!
To: "Mike Dufault" <dufaultm@...>
Cc: galaxy-user@...
Date: Wednesday, April 13, 2011, 11:15 PM


Hi Mike, 
Once the given EBS volume is attached and mounted, all of the data should be in /mnt/galaxyData/files/000/
This assumes the file system is mounted at /mnt/galaxyData, which is where cloudman mounts it automatically on cluster instantiation.

Enis

On Wed, Apr 13, 2011 at 9:00 PM, Mike Dufault <dufaultm@...> wrote:

Hi Enis,
 
I started to use the terminal to check whether the job was running, but it finished successfully at about the same time. Thanks again for helping me to complete the run.
 
Now I have an additional issue. I wanted to save my BAM file, but I kept getting an error. I think the error was because it was too large to send (4.1GB). So I saved what I could to my local HD and terminated the cluster. My EBS volume is 200GB and persisted after the cluster was terminated.
 
I assume that my BAM file resides somewhere in the EBS volume. I started a new Unix cluster and "attached" the EBS volume to that cluster. I also established an ssh connection to the Unix cluster, but I do not know where to find the BAM file. Do you know how I can access the BAM file so that I can transfer it to my local HD?
 
Thanks,
Mike

--- On Wed, 4/13/11, Enis Afgan <eafgan@...> wrote:

From: Enis Afgan <eafgan@...>
Subject: Re: [galaxy-user] Help!!!!!! with Galaxy Cloud!!!!!
To: "vasu punj" <punjv@...>
Date: Wednesday, April 13, 2011, 10:01 AM


Hi Vasu, 
I am not sure I understand your question but the general instructions on how to get started and use Galaxy on the cloud (i.e., Cloudman) are available at usegalaxy.org/cloud

Let us know if that page does not answer your questions,
Enis

On Wed, Apr 13, 2011 at 9:40 AM, vasu punj <punjv@...> wrote:
I was wondering whether there are instructions on how to run Galaxy from the Cloud Console. First, I want to know how Galaxy is set up on the console. Can someone direct me to the instructions, please?
 
Thanks.
  
--- On Tue, 4/12/11, Enis Afgan <eafgan@...> wrote:

From: Enis Afgan <eafgan@...>
Subject: Re: [galaxy-user] Help!!!!!! with Galaxy Cloud!!!!!
To: "Mike Dufault" <dufaultm@...>
Cc: galaxy-user@...
Date: Tuesday, April 12, 2011, 9:31 PM


Galaxy has the functionality to recover any jobs that were running after it is restarted, so it is quite possible for the job to still be running. In addition, from the cloudman console, it appears that at least one instance is pretty heavily loaded, which can also mean that the job is still running. However, without actually accessing the instance through the command line and checking the status of the job queue, it is not possible to tell whether the job is actually running. Do you know how to do that? It's just a few commands in the terminal:
- access the instance
[local]$ ssh -i <path to the private key you downloaded from AWS when you created a key pair> ubuntu@<instance public DNS>
- become galaxy user
[ec2]$ sudo su galaxy 
- list any running jobs
[ec2]$ qstat

If that command returns a list of jobs and the jobs are in state 'r' (running), the job is still running; otherwise, it is not.

Let me know how it goes,
Enis
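The state check Enis describes can be scripted once you are on the instance. A hedged sketch: the qstat output below is fabricated for illustration, and on the cluster you would pipe the real `qstat` into the same awk filter instead (`qstat` prints two header lines, then one row per job with the state in the fifth column):

```shell
# Decide whether any SGE job is in state 'r' (running).
# The sample output is fabricated; on the instance, replace the printf
# with the real `qstat` command.
qstat_output='job-ID  prior    name        user   state submit/start at      queue        slots
------------------------------------------------------------------------------------
     1 0.55500 filter_pile  galaxy r     04/12/2011 21:40:01  all.q@node1      1'
if printf '%s\n' "$qstat_output" | awk 'NR > 2 && $5 == "r" {found = 1} END {exit !found}'; then
  echo "at least one job is still running"
else
  echo "no running jobs"
fi
```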

On Tue, Apr 12, 2011 at 9:49 PM, Mike Dufault <dufaultm@...> wrote:
Hi Enis,

THANK YOU!!!

I see that my "filter pileup on data" step is running. Is this the same analysis that was running before, or did it relaunch when you restarted Galaxy? I just don't know if the analysis would be compromised.

Thanks again to you and the whole Galaxy team.

Best,

Mike

--- On Tue, 4/12/11, Enis Afgan <eafgan@...> wrote:

From: Enis Afgan <eafgan@...>
Subject: Re: [galaxy-user] Help!!!!!! with Galaxy Cloud!!!!!
To: "Mike Dufault" <dufaultm@...>
Cc: "Anton Nekrutenko" <anton@...>, galaxy-user@...
Date: Tuesday, April 12, 2011, 9:16 PM


Ahh, for some reason cloudman thinks Galaxy is not 'running' but still 'starting', and has thus not enabled the given button. To access the analysis, just delete the '/cloud' part of the URL in your browser and that should load Galaxy.

Sorry about the confusion,
Enis

On Tue, Apr 12, 2011 at 9:12 PM, Mike Dufault <dufaultm@...> wrote:
Hi Enis,

Thanks for looking into this.

From the Galaxy Cloudman Console, I can see that it was restarted from the log (thanks), but the "Access Galaxy" choice is still grayed out and I don't know how to access the Analysis window.

Is there a way back into my analysis?

Thanks,
Mike



--- On Tue, 4/12/11, Enis Afgan <eafgan@...> wrote:

From: Enis Afgan <eafgan@...>
Subject: Re: [galaxy-user] Help!!!!!! with Galaxy Cloud!!!!!
To: "Mike Dufault" <dufaultm@...>
Cc: "Anton Nekrutenko" <anton@...>, galaxy-user@...
Date: Tuesday, April 12, 2011, 8:55 PM


Hi Mike, 
Try accessing your Galaxy instance now. It should be ok.

The link in your email contained the IP for your instance, so I took the liberty of restarting Galaxy, and that brought it back up. There seems to have been an issue with Galaxy accessing its database, which resulted in Galaxy crashing. We'll look into why that happened in the first place, but it should be ok now.

Let me know if you have any more trouble,
Enis

On Tue, Apr 12, 2011 at 2:49 PM, Mike Dufault <dufaultm@...> wrote:
Hello Galaxy Staff,

My data has been running on Amazon EC2 for just over 24 hours. I have not closed any windows, and my exome analysis made it all the way through to Filter pileup. I have two tabs for this instance: one is the Galaxy Cloudman Console, and the other is the tab where I perform the analysis, load data, view the history, etc.

Anyway, I went to add a step to the workflow and got the "Welcome Galaxy to the Cloud" screen along with the message: "There is no Galaxy instance running on this host, or the Galaxy instance is not responding. To manage Galaxy on this host, please use the Cloud Console."

What happened???

When I go back to the Galaxy Cloudman Console, it shows that my instance is still running along with the four cores; the cluster log is below. AWS also shows that my instance is running.

Will the workflow finish? Can I get my data? How?

I tried to re-access the analysis page by selecting "Access Galaxy" from the "Galaxy Cloudman Console" but it sends me to the same "Welcome page."

Is there a way to get back into the analysis page?

Please help!!!

Thanks,
Mike

The cluster log shows:
  • 13:05:24 - Master starting
  • 13:05:25 - Completed initial cluster configuration.
  • 13:05:33 - Starting service 'SGE'
  • 13:05:48 - Configuring SGE...
  • 13:05:56 - Successfully setup SGE; configuring SGE
  • 13:05:57 - Saved file 'persistent_data.yaml' to bucket 'cm-a42f040c55e7519eb63bbaf269fa78d3'
  • 13:05:57 - Saved file 'cm_boot.py' to bucket 'cm-a42f040c55e7519eb63bbaf269fa78d3'
  • 13:05:57 - Saved file 'cm.tar.gz' to bucket 'cm-a42f040c55e7519eb63bbaf269fa78d3'
  • 13:05:57 - Problem connecting to bucket 'cm-a42f040c55e7519eb63bbaf269fa78d3', attempt 1/5
  • 13:05:59 - Saved file 'Fam122261.clusterName' to bucket 'cm-a42f040c55e7519eb63bbaf269fa78d3'
  • 13:06:24 - Initializing a 'Galaxy' cluster.
  • 13:06:24 - Retrieved file 'snaps.yaml' from bucket 'cloudman' to 'cm_snaps.yaml'.
  • 13:06:41 - Adding 3 instance(s)...
  • 13:07:02 - Saved file 'persistent_data.yaml' to bucket 'cm-a42f040c55e7519eb63bbaf269fa78d3'
  • 13:07:38 - Saved file 'persistent_data.yaml' to bucket 'cm-a42f040c55e7519eb63bbaf269fa78d3'
  • 13:07:38 - Saved file 'universe_wsgi.ini.cloud' to bucket 'cm-a42f040c55e7519eb63bbaf269fa78d3'
  • 13:07:38 - Saved file 'tool_conf.xml.cloud' to bucket 'cm-a42f040c55e7519eb63bbaf269fa78d3'
  • 13:07:48 - Error mounting file system '/mnt/galaxyData' from '/dev/sdg3', running command '/bin/mount /dev/sdg3 /mnt/galaxyData' returned code '32' and following stderr: 'mount: you must specify the filesystem type '
  • 13:07:52 - Saved file 'persistent_data.yaml' to bucket 'cm-a42f040c55e7519eb63bbaf269fa78d3'
  • 13:07:52 - Starting service 'Postgres'
  • 13:07:52 - PostgreSQL data directory '/mnt/galaxyData/pgsql/data' does not exist (yet?)
  • 13:07:52 - Configuring PostgreSQL with a database for Galaxy...
  • 13:08:05 - Saved file 'persistent_data.yaml' to bucket 'cm-a42f040c55e7519eb63bbaf269fa78d3'
  • 13:08:05 - Starting service 'Galaxy'
  • 13:08:05 - Galaxy daemon not running.
  • 13:08:05 - Galaxy service state changed from 'Starting' to 'Error'
  • 13:08:05 - Setting up Galaxy application
  • 13:08:05 - Retrieved file 'universe_wsgi.ini.cloud' from bucket 'cm-a42f040c55e7519eb63bbaf269fa78d3' to '/mnt/galaxyTools/galaxy-central/universe_wsgi.ini'.
  • 13:08:05 - Retrieved file 'tool_conf.xml.cloud' from bucket 'cm-a42f040c55e7519eb63bbaf269fa78d3' to '/mnt/galaxyTools/galaxy-central/tool_conf.xml'.
  • 13:08:05 - Retrieved file 'tool_data_table_conf.xml.cloud' from bucket 'cloudman' to '/mnt/galaxyTools/galaxy-central/tool_data_table_conf.xml.cloud'.
  • 13:08:05 - Starting Galaxy...
  • 13:08:09 - Galaxy service state changed from 'Error' to 'Starting'
  • 13:08:09 - Saved file 'persistent_data.yaml' to bucket 'cm-a42f040c55e7519eb63bbaf269fa78d3'
  • 13:08:09 - Saved file 'tool_data_table_conf.xml.cloud' to bucket 'cm-a42f040c55e7519eb63bbaf269fa78d3'
  • 13:08:28 - Instance 'i-e46f0a8b' reported alive
  • 13:08:28 - Successfully generated root user's public key.
  • 13:08:28 - Sent master public key to worker instance 'i-e46f0a8b'.
  • 13:08:28 - Instance 'i-e26f0a8d' reported alive
  • 13:08:28 - Sent master public key to worker instance 'i-e26f0a8d'.
  • 13:08:33 - Instance 'i-e06f0a8f' reported alive
  • 13:08:33 - Sent master public key to worker instance 'i-e06f0a8f'.
  • 13:08:33 - Adding instance i-e46f0a8b to SGE Execution Host list
  • 13:08:44 - Successfully added instance 'i-e46f0a8b' to SGE
  • 13:08:44 - Waiting on worker instance 'i-e46f0a8b' to configure itself...
  • 13:08:44 - Instance 'i-e26f0a8d' already in SGE's @allhosts
  • 13:08:44 - Waiting on worker instance 'i-e26f0a8d' to configure itself...
  • 13:08:45 - Instance 'i-e06f0a8f' already in SGE's @allhosts
  • 13:08:45 - Waiting on worker instance 'i-e06f0a8f' to configure itself...
  • 13:08:50 - Instance 'i-e46f0a8b' ready
  • 13:09:27 - Galaxy service state changed from 'Starting' to 'Running'
  • 22:38:18 - Found '3' idle instances; trying to remove '2'
  • 22:38:18 - Specific termination of instance 'i-e26f0a8d' requested.
  • 22:38:18 - Removing instance 'i-e26f0a8d' from SGE
  • 22:38:18 - Successfully updated @allhosts to remove 'i-e26f0a8d'
  • 22:38:19 - Terminating instance 'i-e26f0a8d'
  • 22:38:19 - Initiated requested termination of instance. Terminating 'i-e26f0a8d'.
  • 22:38:19 - Specific termination of instance 'i-e46f0a8b' requested.
  • 22:38:19 - Removing instance 'i-e46f0a8b' from SGE
  • 22:38:19 - Successfully initiated termination of instance 'i-e26f0a8d'
  • 22:38:19 - Successfully updated @allhosts to remove 'i-e46f0a8b'
  • 22:38:20 - Terminating instance 'i-e46f0a8b'
  • 22:38:20 - Initiated requested termination of instance. Terminating 'i-e46f0a8b'.
  • 22:38:20 - Initiated requested termination of instances. Terminating '2' instances.
  • 22:38:20 - Successfully initiated termination of instance 'i-e46f0a8b'
  • 22:38:41 - Found '1' idle instances; trying to remove '1'
  • 22:38:41 - Specific termination of instance 'i-e06f0a8f' requested.
  • 22:38:41 - Removing instance 'i-e06f0a8f' from SGE
  • 22:38:41 - Successfully updated @allhosts to remove 'i-e06f0a8f'
  • 22:38:42 - Initiated requested termination of instance. Terminating 'i-e06f0a8f'.
  • 22:38:42 - Initiated requested termination of instances. Terminating '1' instances.
  • 22:38:42 - Terminating instance 'i-e06f0a8f'
  • 22:38:42 - Successfully initiated termination of instance 'i-e06f0a8f'
  • 22:38:47 - Instance 'i-e26f0a8d' successfully terminated.
  • 22:38:49 - Instance 'i-e46f0a8b' successfully terminated.
  • 22:38:59 - Adding 3 instance(s)...
  • 22:39:07 - Instance 'i-e06f0a8f' successfully terminated.
  • 22:41:02 - Instance 'i-fa096e95' reported alive
  • 22:41:02 - Sent master public key to worker instance 'i-fa096e95'.
  • 22:41:06 - Adding instance i-fa096e95 to SGE Execution Host list
  • 22:41:17 - Successfully added instance 'i-fa096e95' to SGE
  • 22:41:17 - Waiting on worker instance 'i-fa096e95' to configure itself...
  • 22:41:17 - Instance 'i-fe096e91' reported alive
  • 22:41:17 - Sent master public key to worker instance 'i-fe096e91'.
  • 22:41:22 - Adding instance i-fe096e91 to SGE Execution Host list
  • 22:41:34 - Successfully added instance 'i-fe096e91' to SGE
  • 22:41:34 - Waiting on worker instance 'i-fe096e91' to configure itself...
  • 22:41:34 - Instance 'i-fa096e95' ready
  • 22:41:52 - Instance 'i-fe096e91' ready
  • 22:42:28 - Instance 'i-fc096e93' reported alive
  • 22:42:28 - Sent master public key to worker instance 'i-fc096e93'.
  • 22:42:38 - Adding instance i-fc096e93 to SGE Execution Host list
  • 22:42:49 - Successfully added instance 'i-fc096e93' to SGE
  • 22:42:49 - Waiting on worker instance 'i-fc096e93' to configure itself...
  • 22:43:13 - Instance 'i-fc096e93' ready
  • 18:29:10 - Failure checking disk usage. [Errno 12] Cannot allocate memory
  • 18:29:10 - Failure checking disk usage. [Errno 12] Cannot allocate memory
  • 18:29:10 - Failure checking disk usage. [Errno 12] Cannot allocate memory


___________________________________________________________
The Galaxy User list should be used for the discussion of
Galaxy analysis and other features on the public server
at usegalaxy.org.  Please keep all replies on the list by
using "reply all" in your mail client.  For discussion of
local Galaxy instances and the Galaxy source code, please
use the Galaxy Development list:

 http://lists.bx.psu.edu/listinfo/galaxy-dev

To manage your subscriptions to this and other Galaxy lists,
please use the interface at:

 http://lists.bx.psu.edu/









Reply | Threaded
Open this post in threaded view
|

Re: Help!!!!!! with Galaxy Cloud!!!!!

Enis Afgan-2
I don't think you'll need to convert the file, other than maybe renaming the extension (mv <filename>.dat <filename>.bam), because Galaxy just adds that same .dat extension to each file, while the metadata that it keeps tells it which format the file is in. Just try opening the file with the tool you were planning to use and it should work.

Enis
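The rename Enis mentions changes only the file name, not the bytes: the dataset is already stored in its real format under a generic .dat name. A minimal sketch, using a made-up dataset name and a tiny stand-in file:

```shell
# Sketch of the rename with a fabricated dataset name. Galaxy stores every
# dataset as <id>.dat; the bytes are already in the dataset's true format,
# so renaming the extension is all that is needed.
mkdir -p /tmp/bamdemo
printf 'stand-in-bam-bytes' > /tmp/bamdemo/dataset_001.dat   # tiny stand-in
mv /tmp/bamdemo/dataset_001.dat /tmp/bamdemo/sample.bam      # rename only
ls -l /tmp/bamdemo/sample.bam
```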

On Fri, Apr 15, 2011 at 8:49 AM, Mike Dufault <[hidden email]> wrote:
Hi Enis,

With the exception of the BAM file (4.1Gb) I have been able to download everything that I have tried using the "disk" from the history panel. I think the 4.1Gb file is just too large because I keep getting a memory error. Since I am still new to the whole AWS-EC2 set-up, I have not fooled around to much with the instance set up. I have followed the screen-cast give by Anton. Perhaps I need to change the memory settings when I create the instance. It seems it would be better if I could download if from the "disk" icon since then it would be in the correct BAM format.

Anyway, I will try scp directions that you have provided and find out how to convert from .dat back to (or somehow extract the) BAM file once I get it to my local machine. Each step is one step closer.

Thanks again,
Mike


--- On Fri, 4/15/11, Enis Afgan <[hidden email]> wrote:

From: Enis Afgan <[hidden email]>
Subject: Re: [galaxy-user] Help!!!!!! with Galaxy Cloud!!!!!
To: "Mike Dufault" <[hidden email]>
Cc: [hidden email]
Date: Friday, April 15, 2011, 8:21 AM


Hi Mike, 
You should be able to download the desired file(s) directly from Galaxy by expanding the desired history item and then clicking the 'disk' (i.e., download) icon. 

Alternatively, if you want to copy the file by hand directly from the file system on the instance, the following is the command to execute (note that the command is executed from your local machine):
scp -i <path to your AWS key pair file> ubuntu@<instance public DNS>:/mnt/galaxyData/files/000/<file name> .
(also, note the '.' at the end of the command) 
This command will copy the remote file to your local machine and it will put it in your current directory.

Once downloaded, the file can probably be opened with any text editor (unless it's a binary file, in which case it will have to be opened with the appropriate tool that can read the given file format).

Enis

On Fri, Apr 15, 2011 at 12:32 AM, Mike Dufault <dufaultm@...> wrote:
Hello again,

So I am able to see all of the .dat files in /mnt/galaxyData. What commands can I use to download a file to my HD? Also, what program should I use to open the .dat file?

Thanks again,

Mike

--- On Wed, 4/13/11, Enis Afgan <eafgan@...> wrote:

From: Enis Afgan <eafgan@...>
Subject: Re: [galaxy-user] Help!!!!!! with Galaxy Cloud!!!!!
To: "Mike Dufault" <dufaultm@...>
Cc: galaxy-user@...
Date: Wednesday, April 13, 2011, 11:15 PM


Hi Mike, 
Once the given EBS volume is attached and mounted, all of the data should be in /mnt/galaxyData/files/000/
This assumes the file system is mounted to /mnt/galaxyData, which is where it would get mounted to automatically by cloudman on cluster instantiation.

Enis

On Wed, Apr 13, 2011 at 9:00 PM, Mike Dufault <dufaultm@...> wrote:

Hi Enis,
 
I started to use the terminal to check to see if the job was running, but it stopped successfully at the same time. Thanks again for helping me to complete the run.
 
Now I have an additional issue. I wanted to save my BAM file, but I kept getting an error. I think the error was because it was too large to send (4.1GB). So I saved what I could to my local HD and terminated the cluster. My EBS volume is 200GB and persisted after the cluster was terminated.
 
I assume that my BAM file resides somewhere in the EBS volume. I started a new Unix cluster and "attached" the EBS to that cluster. I also established an ssh to the Unis cluster but I do not know where to find the BAM file. Do you know how I can access the BAM file so that I can transfer it to my local HD?
 
Thanks,
Mike

--- On Wed, 4/13/11, Enis Afgan <eafgan@...> wrote:

From: Enis Afgan <eafgan@...>
Subject: Re: [galaxy-user] Help!!!!!! with Galaxy Cloud!!!!!
To: "vasu punj" <punjv@...> Date: Wednesday, April 13, 2011, 10:01 AM


Hi Vasu, 
I am not sure I understand your question but the general instructions on how to get started and use Galaxy on the cloud (i.e., Cloudman) are available at usegalaxy.org/cloud

Let us know if you that page does not answer your questions,
Enis

On Wed, Apr 13, 2011 at 9:40 AM, vasu punj <punjv@...> wrote:
I was wondering if there are instructions how can I run the Galaxy on CloudConsole, Indeed  first I want to know how Galaxy is established on console? Can someone direct me to the instructions please.
 
Thanks.
  
--- On Tue, 4/12/11, Enis Afgan <eafgan@...> wrote:

From: Enis Afgan <eafgan@...>
Subject: Re: [galaxy-user] Help!!!!!! with Galaxy Cloud!!!!!
To: "Mike Dufault" <dufaultm@...>
Cc: galaxy-user@...
Date: Tuesday, April 12, 2011, 9:31 PM


Galaxy has the functionality to recover any jobs that were running after it's restarted so it is quite possible to for the job to still be running. In addition, from the cloudman console, it appears that at least one instance is pretty heavily loaded so that can also mean that the job is still running. However, without actually accessing the instance through the command line and checking the status of the job queue, it is not possible to tell if the job is - actually running. Do you know how to do that? It's just a few commands in the terminal:
- access the instance
[local]$ ssh -i <path to the private key you downloaded from AWS when you created a key pair> ubuntu@<instance public DNS>
- become galaxy user
[ec2]$ sudo su galaxy 
- list any running jobs
[ec2]$ qstat

If that command returns a list of jobs and the jobs are in stare 'r' (running), the job is still running; otherwise, no.

Let me know how it goes,
Enis

On Tue, Apr 12, 2011 at 9:49 PM, Mike Dufault <dufaultm@...> wrote:
Hi Enis,

THANK YOU!!!

I see that my "filter pileup on data" step is running. Is this the same analysis that was running before or did it relauch when you restarted Galaxy? I just don't know if the analysis would be compromised.

Thanks again to you and the whole Galaxy team.

Best,

Mike

--- On Tue, 4/12/11, Enis Afgan <eafgan@...> wrote:

From: Enis Afgan <eafgan@...>
Subject: Re: [galaxy-user] Help!!!!!! with Galaxy Cloud!!!!!
To: "Mike Dufault" <dufaultm@...>
Cc: "Anton Nekrutenko" <anton@...>, galaxy-user@...
Date: Tuesday, April 12, 2011, 9:16 PM


Ahh, for some reason cloudman is thinking Galaxy is not 'running' but still 'starting' and has thus not enabled the given button. To access the analysis, in your browser, just delete the '/cloud' part of the URL and that should load Galaxy.

Sorry about the confusion,
Enis

On Tue, Apr 12, 2011 at 9:12 PM, Mike Dufault <dufaultm@...> wrote:
Hi Enis,

Thanks for looking into this.

From the Galaxy Cloudman Console, I can see that it was restarted from the log (thanks), but the "Access Galaxy" choice is still grayed out and I don't know how to access the Analysis window.

Is there a way back into my analysis?

Thanks,
Mike



--- On Tue, 4/12/11, Enis Afgan <eafgan@...> wrote:

From: Enis Afgan <eafgan@...>
Subject: Re: [galaxy-user] Help!!!!!! with Galaxy Cloud!!!!!
To: "Mike Dufault" <dufaultm@...>
Cc: "Anton Nekrutenko" <anton@...>, galaxy-user@...
Date: Tuesday, April 12, 2011, 8:55 PM


Hi Mike, 
Try accessing your Galaxy instance now. It should be ok.

The link in your email contained the IP for your instance so I took the liberty of restarting Galaxy and that brought it back up. There seems to have been an issue with Galaxy accessing its database and that resulted in Galaxy crashing. We'll look into why that happened in the first place but should be ok now. 

Let me know if you have any more trouble,
Enis

On Tue, Apr 12, 2011 at 2:49 PM, Mike Dufault <dufaultm@...> wrote:
Hello Galaxy Staff,

My data has been running on the Amazon EC2 for just over 24hrs. I have not closed any windows and my Exome analysis made it all the way through to filter on Pile up. I have two tabs for this instance. One is the Galaxy Cloudman Console and the other is the tab where I perform the analysis, load data, history etc.

Anyway, I went to add a step to the work flow and the screen "Welcome Galaxy to the Cloud" screen along with the information "There is no Galaxy instance running on this host, or the Galaxy instance is not responding. To manage Galaxy on this host, please use the Cloud Console."

What happened???

When I go back to the Galaxy Cloudman Console, it shows that my instance is still running along with the four cores, the Cluster log is below. AWS also shows that my instance is running.

Will the work flow finish? Can I get my data? How?

I tried to re-access the analysis page by selecting "Access Galaxy" from the "Galaxy Cloudman Console" but it sends me to the same "Welcome page."

Is there a way to get back into the analysis page?

Please help!!!

Thanks,
Mike

The cluster log shows:
  • 13:05:24 - Master starting
  • 13:05:25 - Completed initial cluster configuration.
  • 13:05:33 - Starting service 'SGE'
  • 13:05:48 - Configuring SGE...
  • 13:05:56 - Successfully setup SGE; configuring SGE
  • 13:05:57 - Saved file 'persistent_data.yaml' to bucket 'cm-a42f040c55e7519eb63bbaf269fa78d3'
  • 13:05:57 - Saved file 'cm_boot.py' to bucket 'cm-a42f040c55e7519eb63bbaf269fa78d3'
  • 13:05:57 - Saved file 'cm.tar.gz' to bucket 'cm-a42f040c55e7519eb63bbaf269fa78d3'
  • 13:05:57 - Problem connecting to bucket 'cm-a42f040c55e7519eb63bbaf269fa78d3', attempt 1/5
  • 13:05:59 - Saved file 'Fam122261.clusterName' to bucket 'cm-a42f040c55e7519eb63bbaf269fa78d3'
  • 13:06:24 - Initializing a 'Galaxy' cluster.
  • 13:06:24 - Retrieved file 'snaps.yaml' from bucket 'cloudman' to 'cm_snaps.yaml'.
  • 13:06:41 - Adding 3 instance(s)...
  • 13:07:02 - Saved file 'persistent_data.yaml' to bucket 'cm-a42f040c55e7519eb63bbaf269fa78d3'
  • 13:07:38 - Saved file 'persistent_data.yaml' to bucket 'cm-a42f040c55e7519eb63bbaf269fa78d3'
  • 13:07:38 - Saved file 'universe_wsgi.ini.cloud' to bucket 'cm-a42f040c55e7519eb63bbaf269fa78d3'
  • 13:07:38 - Saved file 'tool_conf.xml.cloud' to bucket 'cm-a42f040c55e7519eb63bbaf269fa78d3'
  • 13:07:48 - Error mounting file system '/mnt/galaxyData' from '/dev/sdg3', running command '/bin/mount /dev/sdg3 /mnt/galaxyData' returned code '32' and following stderr: 'mount: you must specify the filesystem type '
  • 13:07:52 - Saved file 'persistent_data.yaml' to bucket 'cm-a42f040c55e7519eb63bbaf269fa78d3'
  • 13:07:52 - Starting service 'Postgres'
  • 13:07:52 - PostgreSQL data directory '/mnt/galaxyData/pgsql/data' does not exist (yet?)
  • 13:07:52 - Configuring PostgreSQL with a database for Galaxy...
  • 13:08:05 - Saved file 'persistent_data.yaml' to bucket 'cm-a42f040c55e7519eb63bbaf269fa78d3'
  • 13:08:05 - Starting service 'Galaxy'
  • 13:08:05 - Galaxy daemon not running.
  • 13:08:05 - Galaxy service state changed from 'Starting' to 'Error'
  • 13:08:05 - Setting up Galaxy application
  • 13:08:05 - Retrieved file 'universe_wsgi.ini.cloud' from bucket 'cm-a42f040c55e7519eb63bbaf269fa78d3' to '/mnt/galaxyTools/galaxy-central/universe_wsgi.ini'.
  • 13:08:05 - Retrieved file 'tool_conf.xml.cloud' from bucket 'cm-a42f040c55e7519eb63bbaf269fa78d3' to '/mnt/galaxyTools/galaxy-central/tool_conf.xml'.
  • 13:08:05 - Retrieved file 'tool_data_table_conf.xml.cloud' from bucket 'cloudman' to '/mnt/galaxyTools/galaxy-central/tool_data_table_conf.xml.cloud'.
  • 13:08:05 - Starting Galaxy...
  • 13:08:09 - Galaxy service state changed from 'Error' to 'Starting'
  • 13:08:09 - Saved file 'persistent_data.yaml' to bucket 'cm-a42f040c55e7519eb63bbaf269fa78d3'
  • 13:08:09 - Saved file 'tool_data_table_conf.xml.cloud' to bucket 'cm-a42f040c55e7519eb63bbaf269fa78d3'
  • 13:08:28 - Instance 'i-e46f0a8b' reported alive
  • 13:08:28 - Successfully generated root user's public key.
  • 13:08:28 - Sent master public key to worker instance 'i-e46f0a8b'.
  • 13:08:28 - Instance 'i-e26f0a8d' reported alive
  • 13:08:28 - Sent master public key to worker instance 'i-e26f0a8d'.
  • 13:08:33 - Instance 'i-e06f0a8f' reported alive
  • 13:08:33 - Sent master public key to worker instance 'i-e06f0a8f'.
  • 13:08:33 - Adding instance i-e46f0a8b to SGE Execution Host list
  • 13:08:44 - Successfully added instance 'i-e46f0a8b' to SGE
  • 13:08:44 - Waiting on worker instance 'i-e46f0a8b' to configure itself...
  • 13:08:44 - Instance 'i-e26f0a8d' already in SGE's @allhosts
  • 13:08:44 - Waiting on worker instance 'i-e26f0a8d' to configure itself...
  • 13:08:45 - Instance 'i-e06f0a8f' already in SGE's @allhosts
  • 13:08:45 - Waiting on worker instance 'i-e06f0a8f' to configure itself...
  • 13:08:50 - Instance 'i-e46f0a8b' ready
  • 13:09:27 - Galaxy service state changed from 'Starting' to 'Running'
  • 22:38:18 - Found '3' idle instances; trying to remove '2'
  • 22:38:18 - Specific termination of instance 'i-e26f0a8d' requested.
  • 22:38:18 - Removing instance 'i-e26f0a8d' from SGE
  • 22:38:18 - Successfully updated @allhosts to remove 'i-e26f0a8d'
  • 22:38:19 - Terminating instance 'i-e26f0a8d'
  • 22:38:19 - Initiated requested termination of instance. Terminating 'i-e26f0a8d'.
  • 22:38:19 - Specific termination of instance 'i-e46f0a8b' requested.
  • 22:38:19 - Removing instance 'i-e46f0a8b' from SGE
  • 22:38:19 - Successfully initiated termination of instance 'i-e26f0a8d'
  • 22:38:19 - Successfully updated @allhosts to remove 'i-e46f0a8b'
  • 22:38:20 - Terminating instance 'i-e46f0a8b'
  • 22:38:20 - Initiated requested termination of instance. Terminating 'i-e46f0a8b'.
  • 22:38:20 - Initiated requested termination of instances. Terminating '2' instances.
  • 22:38:20 - Successfully initiated termination of instance 'i-e46f0a8b'
  • 22:38:41 - Found '1' idle instances; trying to remove '1'
  • 22:38:41 - Specific termination of instance 'i-e06f0a8f' requested.
  • 22:38:41 - Removing instance 'i-e06f0a8f' from SGE
  • 22:38:41 - Successfully updated @allhosts to remove 'i-e06f0a8f'
  • 22:38:42 - Initiated requested termination of instance. Terminating 'i-e06f0a8f'.
  • 22:38:42 - Initiated requested termination of instances. Terminating '1' instances.
  • 22:38:42 - Terminating instance 'i-e06f0a8f'
  • 22:38:42 - Successfully initiated termination of instance 'i-e06f0a8f'
  • 22:38:47 - Instance 'i-e26f0a8d' successfully terminated.
  • 22:38:49 - Instance 'i-e46f0a8b' successfully terminated.
  • 22:38:59 - Adding 3 instance(s)...
  • 22:39:07 - Instance 'i-e06f0a8f' successfully terminated.
  • 22:41:02 - Instance 'i-fa096e95' reported alive
  • 22:41:02 - Sent master public key to worker instance 'i-fa096e95'.
  • 22:41:06 - Adding instance i-fa096e95 to SGE Execution Host list
  • 22:41:17 - Successfully added instance 'i-fa096e95' to SGE
  • 22:41:17 - Waiting on worker instance 'i-fa096e95' to configure itself...
  • 22:41:17 - Instance 'i-fe096e91' reported alive
  • 22:41:17 - Sent master public key to worker instance 'i-fe096e91'.
  • 22:41:22 - Adding instance i-fe096e91 to SGE Execution Host list
  • 22:41:34 - Successfully added instance 'i-fe096e91' to SGE
  • 22:41:34 - Waiting on worker instance 'i-fe096e91' to configure itself...
  • 22:41:34 - Instance 'i-fa096e95' ready
  • 22:41:52 - Instance 'i-fe096e91' ready
  • 22:42:28 - Instance 'i-fc096e93' reported alive
  • 22:42:28 - Sent master public key to worker instance 'i-fc096e93'.
  • 22:42:38 - Adding instance i-fc096e93 to SGE Execution Host list
  • 22:42:49 - Successfully added instance 'i-fc096e93' to SGE
  • 22:42:49 - Waiting on worker instance 'i-fc096e93' to configure itself...
  • 22:43:13 - Instance 'i-fc096e93' ready
  • 18:29:10 - Failure checking disk usage. [Errno 12] Cannot allocate memory
  • 18:29:10 - Failure checking disk usage. [Errno 12] Cannot allocate memory
  • 18:29:10 - Failure checking disk usage. [Errno 12] Cannot allocate memory


___________________________________________________________
The Galaxy User list should be used for the discussion of
Galaxy analysis and other features on the public server
at usegalaxy.org.  Please keep all replies on the list by
using "reply all" in your mail client.  For discussion of
local Galaxy instances and the Galaxy source code, please
use the Galaxy Development list:

 http://lists.bx.psu.edu/listinfo/galaxy-dev

To manage your subscriptions to this and other Galaxy lists,
please use the interface at:

 http://lists.bx.psu.edu/



