
Contributor - Network module

Description

The goal of this proposal is to provide basic network features such as sockets, domain name resolution, etc.

Network functions must be added to Scilab to provide:

Since Scilab 5 is not thread safe, it might be more interesting to perform such tasks in Scilab 6.

Source code

http://forge.scilab.org/index.php/p/scinetwork/

Ideas

Take inspiration from the network APIs of other languages:

Roadmap

Task                                          | More info | Priority | Status
----------------------------------------------|-----------|----------|------------
Main tasks                                    |           |          |
Implement functions for TCP client            |           | 1        | Started
Implement functions for UDP client            |           | 1        | Not started
Implement functions for TCP server            |           | 1        | Not started
Implement functions for UDP server            |           | 1        | Not started
Implement wrapping the Scilab matrix variable |           | 1        | Not started
Implement macros and examples for TCP         |           | 1        | Not started
Implement macros and examples for UDP         |           | 1        | Not started

Accepted GSoC proposal

The goal of this project would be to add some network capabilities to Scilab, on a very low (socket) level. It would allow Scilab to act as both a client and a server, using at least the two basic protocols, TCP and UDP.

There are two main paths we can take to implement network capabilities in Scilab.

1. Berkeley sockets is a C API that enables applications to interact between processes or over a network connection, using a network protocol (TCP, UDP, SCTP, etc.). It has been around for a few decades, so many users are already familiar with it, as it is likely that they have used it at some point. It is pretty much the standard for socket-level network programming.

2. Another option would be to use an external socket library. There are many candidates here, but zeromq seems to be a clear favorite when it comes to low-level socket interaction. It is fully portable and thread safe, and it allows us to design a very complicated communication system using a very simple interface, requiring minimal effort on the programmer's side. It also has stable bindings for other languages, so we could even implement this toolbox in Java (if there were a need to use a managed language).

Each of these approaches has its pros and cons.

Berkeley Sockets Pros

* The library has been in use for a couple of decades. Many users will be familiar with it, and will use this toolbox with ease.

* Because of its long history, the documentation for Berkeley sockets is plentiful. If the toolbox follows the model of the C API (same function signatures, same constants, etc.), the documentation can easily apply to both. This is easily achievable since both Scilab and C have a procedural nature.

* The library is very stable

* As the toolbox would be written as a C gateway, anyone familiar with C could extend it further

* Most of the BSD socket library has been ported to Windows, with some exceptions.

* The BSD sockets libraries come installed with the operating system (or developer package), on both Unix-like and Windows systems. Users will be able to try out this toolbox very easily, without needing to go through a complicated install of an external library.
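Since the toolbox would follow the C API model, the kind of call the gateway would wrap looks roughly like this minimal POSIX-only sketch of opening a TCP client connection (open_tcp_client is a hypothetical helper name, and the address and port are placeholders):

#include <string.h>
#include <unistd.h>
#include <arpa/inet.h>
#include <netinet/in.h>
#include <sys/socket.h>

/* Hypothetical helper: connect a TCP client socket to 127.0.0.1:9999. */
int open_tcp_client(void)
{
    struct sockaddr_in addr;
    int sock = socket(AF_INET, SOCK_STREAM, 0);
    if (sock < 0)
        return -1;

    memset(&addr, 0, sizeof addr);
    addr.sin_family = AF_INET;
    addr.sin_port = htons(9999);                     /* placeholder port */
    inet_pton(AF_INET, "127.0.0.1", &addr.sin_addr); /* placeholder host */

    if (connect(sock, (struct sockaddr *)&addr, sizeof addr) < 0) {
        close(sock);
        return -1;
    }
    return sock;  /* the caller can now send()/recv() on this descriptor */
}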

Berkeley Sockets Cons

* For people with no previous BSD socket experience, it is not the simplest library to use. There are many different structures, constants, etc. With BSD sockets, even something as simple as sending an integer requires experience and fiddling with big-endian and little-endian byte order (see the sketch after this list). Of course, we could use some wrappers or Scilab macros to overcome this.

* There are a few, but distinct, differences between the BSD sockets and Winsock libraries [1]. This means the toolbox code would need some logic like

if (this is Windows)
    use function A;
else
    use function B;

which is not the most elegant solution. This can be partially overcome with #define directives, but it is still not the nicest approach.

It could get very complicated if we tried to add functionality where Scilab acts as a server handling more than one client.

We would have to use a different method for each operating system (for example, pthreads or fork() on Unix-like systems, and threads or events on Windows).

Signals, another way of handling multiple clients with BSD sockets, are out of the question on Windows. In short, it is not the simplest task, and the testing could be painful.
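As an illustration of the byte-order point above, a minimal POSIX-only sketch of sending a single 32-bit integer over an already-connected socket could look like this (send_int32 is a hypothetical helper name):

#include <stdint.h>
#include <arpa/inet.h>
#include <sys/socket.h>

/* Hypothetical helper: send one 32-bit integer over a connected socket. */
int send_int32(int sock, int32_t value)
{
    uint32_t wire = htonl((uint32_t)value);  /* host byte order -> network byte order */
    return (int)send(sock, &wire, sizeof wire, 0);
}

/* The receiving side must undo the conversion with ntohl(). */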

Zeromq Pros

* Very powerful, high performance

* Simple interface, but still lots of configurable options available at a very low level

* Very easy to extend a client to interact with multiple servers, with automatic load balancing handled by zeromq.

* Zeromq has a Java binding, which would mean that the toolbox could be implemented mostly in Java. This would make the toolbox fully portable.

Zeromq Cons

* If you have no experience with zeromq, it takes a bit of time to get used to it. This might be a tricky point for me, but users should not be exposed to it.

* Relatively new project, so documentation is limited.

* zeromq doesn't ship with the operating system. I am not yet sure whether we could (or even should) enforce installation of zeromq alongside the installation of the toolbox. On Windows, installing zeromq appears to be very complicated [3]. It might be the case that we're OK with this and leave the installation of zeromq to the user.

There are some common challenges that we would have to overcome with either approach.

Testability

There are quite a few functions that need to be added to the toolbox before we can see some usual server-client communication. I would try to overcome this by having a simple server implemented in C/Java, independent of Scilab, listening on a port, and start by implementing a client in the toolbox (a rough sketch of such a test server follows).
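A minimal POSIX-only sketch of such a standalone test server, assuming an arbitrary port and a single client, could look like this (error handling omitted):

#include <string.h>
#include <unistd.h>
#include <arpa/inet.h>
#include <netinet/in.h>
#include <sys/socket.h>

/* Standalone echo server for exercising the toolbox client (sketch only). */
int main(void)
{
    char buf[1024];
    ssize_t n;
    int listener, client;
    struct sockaddr_in addr;

    listener = socket(AF_INET, SOCK_STREAM, 0);

    memset(&addr, 0, sizeof addr);
    addr.sin_family = AF_INET;
    addr.sin_addr.s_addr = htonl(INADDR_ANY);
    addr.sin_port = htons(9999);                 /* arbitrary test port */

    bind(listener, (struct sockaddr *)&addr, sizeof addr);
    listen(listener, 1);

    client = accept(listener, NULL, NULL);       /* wait for one client */
    while ((n = recv(client, buf, sizeof buf, 0)) > 0)
        send(client, buf, (size_t)n, 0);         /* echo the bytes back */

    close(client);
    close(listener);
    return 0;
}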

Serialization

Scilab objects would have to be carefully sent and received over the network. I discovered some work done for the SOAP client as part of GSoC 2010. If this code is reliable, we could reuse it for this toolbox. I assume that this work would have to be integrated into the Scilab source, so that it can be shared between the two toolboxes. Depending on the health of this code, we might have to implement an object that wraps a Scilab variable and that we know how to serialize.
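To make the idea concrete, a hypothetical serializer for a real (non-complex) Scilab matrix and its wire format might look like the following; the function name and the layout are illustrative only and are not taken from the SOAP client code:

#include <stdint.h>
#include <string.h>
#include <arpa/inet.h>

/* Hypothetical wire format: [uint32 rows][uint32 cols][rows*cols doubles]. */
size_t serialize_real_matrix(const double *data, int rows, int cols, unsigned char *buf)
{
    uint32_t r = htonl((uint32_t)rows);
    uint32_t c = htonl((uint32_t)cols);
    size_t payload = (size_t)rows * (size_t)cols * sizeof(double);

    memcpy(buf, &r, sizeof r);
    memcpy(buf + sizeof r, &c, sizeof c);
    /* Assumes both ends use the same IEEE-754 double representation. */
    memcpy(buf + sizeof r + sizeof c, data, payload);
    return sizeof r + sizeof c + payload;
}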

Providing context / constants

Both zeromq and BSD sockets use a lot of constants to configure a socket. All of these would have to be somehow available from Scilab. Ideas:

For BSD sockets, if we wish for the user to be able to configure more options, we would require them to pass in a number of flags (for example, PF_INET, SOCK_STREAM, etc.). A set of these constants would have to be available to Scilab users, either through a utility that retrieves a constant given its string name, or by parsing the strings users give us and trying to resolve them to constants (see the lookup sketch below).

Zeromq heavily depends on retrieving the "zeromq context". We might want to allow the user to handle this directly. In this case, we would have to provide an interface for the user to obtain a new or existing context using some sort of context manager.
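For the first idea, a small lookup table in the C gateway could be enough to resolve string names to BSD socket constants; this is a sketch with hypothetical names, not existing toolbox code:

#include <string.h>
#include <sys/socket.h>

struct const_entry { const char *name; int value; };

/* Map user-supplied strings to the corresponding socket constants. */
static const struct const_entry socket_constants[] = {
    { "PF_INET",     PF_INET },
    { "SOCK_STREAM", SOCK_STREAM },
    { "SOCK_DGRAM",  SOCK_DGRAM },
};

/* Return 0 and fill *out on success, -1 if the name is unknown. */
int resolve_constant(const char *name, int *out)
{
    size_t i;
    for (i = 0; i < sizeof socket_constants / sizeof socket_constants[0]; i++) {
        if (strcmp(socket_constants[i].name, name) == 0) {
            *out = socket_constants[i].value;
            return 0;
        }
    }
    return -1;
}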

How the feature (whatever it is) will be available to the user

This feature would be developed as an external module (toolbox). Example usage:

> ct = zmq_getContext();

This would call zmq.Context() and return a pointer to Scilab, using api_scilab.

> socket = zmq_socket(ct, socket_type)

zmq_socket takes a context returned by zmq_getContext() and a configurable socket type. This would extract the pointer and call ct.socket(socket_type), where socket_type is "SUB", "PUB", etc. zmq_socket would return the socket descriptor.

The two methods above could be encapsulated into one, and we would handle the 0mq context internally only.

> zmq_setSockOpt(socket, option_name, option_value)

socket is the socket descriptor returned by zmq_socket. The options are described at [2].

> zmq_connect(socket, "tcp://127.0.0.1:9999");

This connects on the socket returned by zmq_socket.

etc.
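Behind these Scilab-level functions, the gateway would essentially drive the zeromq C API. For reference, a minimal request client written directly against libzmq (using the newer zmq_ctx_* calls; older 2.x releases use zmq_init/zmq_term instead) looks roughly like this, with a placeholder endpoint:

#include <string.h>
#include <zmq.h>

int main(void)
{
    char reply[256];

    void *ctx  = zmq_ctx_new();                 /* the zeromq context */
    void *sock = zmq_socket(ctx, ZMQ_REQ);      /* request socket */

    zmq_connect(sock, "tcp://127.0.0.1:9999");  /* placeholder endpoint */
    zmq_send(sock, "hello", 5, 0);              /* send a request */
    zmq_recv(sock, reply, sizeof reply, 0);     /* wait for the reply */

    zmq_close(sock);
    zmq_ctx_destroy(ctx);
    return 0;
}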

A realistic schedule with objectives (one every two weeks for example) and deadlines

Week 1

Wiring up: Wrap a Scilab variable (so it could be serialized easily), provide a mechanism for accessing constants, etc.

Week 2

Add methods to the toolbox so that Scilab can act as a TCP client

Week 3

Add methods to the toolbox so that Scilab can act as a UDP client. I expect this to be simpler than TCP, but some of the TCP work might spill over into this week.

Week 4

Make sure that Scilab acts correctly as a TCP / UDP client for various data types. Write some tests and fix some bugs (there will be some!)

Week 5

Enable Scilab to act as a TCP server that can handle only one client

Week 6

Verify how TCP communication between two Scilab instances works with various data types and on various operating systems. Depending on the approach we choose, this might be quite challenging. Start writing tests, fix bugs.

Week 7

Scrub bugs, finish writing tests, write some blog entries and documentation for the midterm.

Week 8

Enable Scilab to act as a UDP server that can handle only one client.

Week 9

Depending on the mentor's preference, we could take this in various directions:

a) implement some more protocols for the client (or even server?)

b) try handling multiple clients / multiple servers

c) implement a simple GUI, or some logging mechanism, for the scenario when Scilab acts as a server.

The exact work items for weeks 9, 10, and 11 are still to be agreed with the mentor.

Week 12

Fix bugs and write tests for items from week 9, 10, 11.

Week 13

Suggested pencils down date. Write documentation, scrub code, write tests, remove redundant comments, etc.

End of GSoC

