18:50:26 Anon. Myrtle: It seems the recording has stopped
18:50:32 Anon. Hobart: ^^
18:50:38 Anon. Murdoch: ^
18:51:28 Anon. Grandview: professor … the recording has stopped!!!
18:51:33 Anon. Rocket: Lean On Me
18:53:54 Anon. Spiderman: yup
18:55:16 Anon. Superman: LOM Number: (412)-530-4700
18:56:38 Anon. Mantis: no
18:57:15 Anon. Mantis: depends on the location of the pattern in the input
18:58:33 Anon. Mantis: no
18:59:57 Anon. Heimdall: not unless it's trained to
19:00:00 Anon. Green Lantern: no
19:00:04 Anon. Batman: Different location of the flower
19:00:05 Anon. P.J. McArdle: no
19:00:15 Anon. BlackWidow: different subspace
19:00:27 Anon. Murray: The weights and biases have been configured for different input dimensions?
19:00:58 Anon. Heimdall: yes
19:02:51 Anon. Mantis: scan the input for the pattern
19:02:53 Anon. BlackWidow: train an MLP over all the different subspaces?
19:03:51 Anon. Frew: max
19:14:19 Anon. Heimdall: no
19:27:44 Anon. Green Lantern: Can we think of them as one subnet scanning over time, or as multiple identical subnets simultaneously capturing information from each subarea? Are the two views equivalent?
19:29:33 Anon. Murray: no
19:36:17 Anon. Hobart: yes
19:57:08 Anon. Heimdall: yes
20:02:21 Anon. Spiderman: yeah
20:02:24 Anon. S. Highland: yes
20:05:37 Anon. Spiderman: so there's one convolution for each neuron, each producing a new map, and all the maps together make up the next layer?
20:14:10 Anon. Frew: 75%
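[Editor's note] Two questions in the log above are substantive: Green Lantern's (is one subnet scanning over time equivalent to many identical subnets applied simultaneously?) and Spiderman's (does each neuron's convolution produce a map, with the maps stacked to form the next layer?). A minimal NumPy sketch of both, assuming an illustrative 1-D input of length 20, a window of width 5, a tanh activation, and 3 filters (all sizes and the activation are my assumptions, not from the lecture):

```python
import numpy as np

rng = np.random.default_rng(0)

x = rng.standard_normal(20)   # 1-D input signal (length assumed)
w = rng.standard_normal(5)    # shared weights of one subnet / filter
b = 0.1                       # shared bias

# View 1: one subnet scanning over time, one window per step.
scanned = np.array([np.tanh(x[t:t + 5] @ w + b) for t in range(len(x) - 4)])

# View 2: 16 identical subnets, each applied to its own window at once.
windows = np.lib.stride_tricks.sliding_window_view(x, 5)  # shape (16, 5)
simultaneous = np.tanh(windows @ w + b)

# The two views yield the same output map, so they are equivalent.
assert np.allclose(scanned, simultaneous)

# One convolution per neuron: 3 filters give 3 maps, stacked as the next layer.
filters = rng.standard_normal((3, 5))
maps = np.tanh(windows @ filters.T + b)   # shape (16, 3): one map per filter
```

The equivalence holds because both views multiply every window by the same shared weights; only the scheduling (sequential vs. parallel) differs.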