= Mexcla - Expansion =

Expand the existing infrastructure to support more than one interpretation channel.

= Development Approaches =

* Leave everything as is, and just work on mexcla.tac (the Twisted service).
* Build a form that creates the channel and determines how many interpretation channels exist, using the mexcla.tac infrastructure.
* Revamp the whole thing and use FreeSWITCH, WebRTC, and SIP.js; this option may or may not also require mexcla.tac. Or use nodejs (see https://github.com/englercj/node-esl).

== Everything intact ==

=== Pros ===

* Easiest to accomplish.
* Faster.
* Keeps the same, not perfectly beautiful, interface.
* Not too much work.

=== Cons ===

* Does not resolve current problems with mexcla (which might be a problem with the protocol).
* Does not improve sound quality.
* Does not advance our WebRTC development.
* Does not fully integrate into our WebRTC infrastructure.
* Limits user interaction.

== Build a web form for handling channel creation ==

=== Pros ===

* Allows better interaction for people hosting a call.
* Permits a language designation for each interpretation channel, i.e. more information for callers.
* Helps expand functionality specific to each call.
* Relatively easy to implement.
* Would allow adding features (pads, irc, calc, presentation, chat) to the channel.

=== Cons ===

* Would require more development and testing work.
* Would create more bugs.

== Revamp ==

More info: FreeSWITCH supports an [http://wiki.freeswitch.org/wiki/Mod_event_socket event socket model]. There are two kinds: outbound (this is how mexcla.tac works: you call FreeSWITCH, press 6, and you are then sent to mexcla) and inbound (an external application connects to FreeSWITCH and sends it commands). Having both is critical for mexcla. If we add inbound, and we write the application in nodejs, then your web browser can communicate with FreeSWITCH over websockets and ask questions like: who is in the conference? Who is talking right now?
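As a rough sketch of what the inbound approach could look like, the snippet below builds standard mod_conference API command strings and shows (commented out, since it needs a running FreeSWITCH) how node-esl would send them. The room name "mexcla", the host, port, and password are placeholder defaults, not decisions we've made.

```javascript
// Sketch only: assumes the node-esl library (published on npm as "modesl")
// and a FreeSWITCH event socket on 127.0.0.1:8021 with the default password.
// The conference name "mexcla" is a placeholder.

// Build a mod_conference API command string. Subcommands like
// "list", "mute", "unmute", and "kick" are part of FreeSWITCH's
// standard conference API.
function conferenceCommand(room, action, memberId) {
  return memberId === undefined
    ? `conference ${room} ${action}`
    : `conference ${room} ${action} ${memberId}`;
}

// Example inbound connection (requires a running FreeSWITCH):
// const esl = require('modesl');
// const conn = new esl.Connection('127.0.0.1', 8021, 'ClueCon', () => {
//   // Ask FreeSWITCH who is in the conference.
//   conn.api(conferenceCommand('mexcla', 'list'), (res) => {
//     console.log(res.getBody());
//   });
// });
```

The same command strings could then be relayed from the browser over a websocket, which is what would let a web UI answer "who is in the conference?" live.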
We can also send FreeSWITCH commands, like muting a participant.

=== Pros ===

* Integrates live (video) and mexcla (audio).
* We get to learn cool new things.
* Tighter integration with FreeSWITCH (probably).
* Perhaps better audio quality.
* Would be using websockets.
* We'd be cutting edge!!!!
* Perhaps better integration with live.m.o.
* USi feels they can raise more money for this one.
* Can provide standard conferencing tools, like the ability to see who else is in the conference, mute people, etc.

=== Cons ===

* We don't know what we're doing.
* Possibly lots more bugs.
* Perhaps we end up revamping live.m.o too.

= Language Architecture =

We have a number of ideas about how to handle the interpretation infrastructure:

== Current Structure ==

Requires listeners to switch between the main channel and the interpretation channel. In this model a single interpreter "could" interpret in both directions between their native language and the primary language.

== Ideal Structure ==

Two interpretation lines for each language, which would require at least two interpreters per language. In this model the primary channel would only carry the central language. So one interpretation line, call it "French", would have a channel that interprets into French and a channel that interprets into the primary language (with the interpreter speaking the central language into the primary channel).

= Conference Architecture =

To actualize the complexity of the ideal interpretation setup, we think the most effective mechanism would be a fully structured conferencing system, including moderation. With a moderator we could have more automated control of the flow of interpretation: the moderator would choose who is speaking, and we'd automate which interpretation channel gets piped into the primary channel.
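The automated routing decision could start as simple as the rule sketched below (all channel and language names are illustrative, not settled): if the current speaker is already speaking the central language, nothing needs to be piped into the primary channel; otherwise the into-primary interpretation feed for the speaker's language does.

```javascript
// Sketch of the automated routing decision described above.
// Channel names are hypothetical; the primary channel carries the
// central language.

// Given the current speaker's language and the conference's central
// language, return the name of the interpretation feed that should be
// piped into the primary channel, or null if none is needed.
function feedForPrimary(speakerLang, centralLang) {
  if (speakerLang === centralLang) {
    return null; // speaker is already audible on the primary channel
  }
  // Each language has an "into-primary" interpretation feed, e.g.
  // "french-to-primary" when a French speaker has the floor.
  return `${speakerLang}-to-primary`;
}
```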
Assuming we have three lines per interpretation channel, the moderator approach would let the software determine whether or not the third interpretation channel gets piped into the central channel; or, if there is only one interpreter, that interpreter's stream gets put into the primary channel.

== Moderator Admin Interface ==

We'll have to create an admin interface for the moderator, which would allow them to build a stack of speakers and to call on someone, among other things not yet outlined. We might want a way for the moderator and interpreters to speak privately (or mostly privately) in case something goes wrong.

== Interpreter's Interface ==

We'll need some kind of interpreter interface that allows the interpreter to signal the moderator, though initially the interpreter will have the ability to speak on the main line.

== Participant Interface ==

This is the standard participant interface, which would include raising/lowering a hand to speak, switching languages, seeing participants, etc.

= Room Builder =

This is the web form encountered prior to the creation of the conference room. It will include the number of interpretation channels, among other settings.
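The speaker stack described under the moderator admin interface could start as the first-in, first-out queue sketched below (names and API are illustrative assumptions, not a design decision): participants raise a hand to join the queue, and the moderator calls on the next speaker.

```javascript
// Sketch of a moderator's speaker stack: participants raise their hand
// to join the queue, and the moderator calls on the next speaker.
class SpeakerStack {
  constructor() {
    this.queue = [];
  }
  // A participant raises their hand; duplicates are ignored.
  raiseHand(participant) {
    if (!this.queue.includes(participant)) this.queue.push(participant);
  }
  // A participant lowers their hand before being called on.
  lowerHand(participant) {
    this.queue = this.queue.filter((p) => p !== participant);
  }
  // Moderator calls on the next speaker (first in, first out).
  callNext() {
    return this.queue.shift(); // undefined if nobody is waiting
  }
}
```

Wired to the event socket, `callNext()` is the point where we would also trigger the channel piping for the chosen speaker's language.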
= Breakdown of areas to focus on =

||= Idea =||= Consideration =||= Status =||
|| FreeSWITCH || infrastructure || Needs research ||
|| Webform || html || Needs research ||
|| Pad || etherpad || Needs integration method ||
|| Calc || ethercalc || Needs integration method ||
|| Presentation || etherpad or impress.js || Needs research ||
|| Chat || xmpp, irc, other || Needs research ||
|| Private Message || possibly connected to chat || Needs research ||
|| Moderation UI || FreeSWITCH hooks || Needs research ||

= Actions =

mv and ross are reading the FreeSWITCH cookbook to gain a deeper understanding of the communication mechanisms; this does *not* include the new WebRTC functionality of FreeSWITCH 1.4. mv will also research the presentation software to allow uploading presentation files for any conference call.

[[Image(Freeswitch mexcla plans.png)]]