- SilcDll silc_dll_load(const char *object_path, SilcDllFlags flags);
- void silc_dll_close(SilcDll dll);
- void *silc_dll_getsym(SilcDll dll, const char *symbol);
- const char *silc_dll_error(SilcDll dll);
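-
-   On POSIX systems this API would presumably be a thin wrapper over
-   dlopen(3)/dlsym(3)/dlerror(3).  A minimal sketch of such a wrapper
-   (the struct layout is hypothetical and SilcDllFlags is omitted):

```c
#include <dlfcn.h>
#include <stdlib.h>

/* Hypothetical sketch: SilcDll as a thin handle around the POSIX
   dynamic loader.  Names mirror the proposed prototypes above. */
typedef struct SilcDllStruct {
  void *handle;        /* native handle, NULL if load failed */
  const char *error;   /* last error message, NULL if none */
} *SilcDll;

SilcDll silc_dll_load(const char *object_path)
{
  SilcDll dll = calloc(1, sizeof(*dll));
  if (!dll)
    return NULL;
  dll->handle = dlopen(object_path, RTLD_NOW);
  if (!dll->handle)
    dll->error = dlerror();
  return dll;
}

void *silc_dll_getsym(SilcDll dll, const char *symbol)
{
  if (!dll->handle)
    return NULL;
  return dlsym(dll->handle, symbol);
}

const char *silc_dll_error(SilcDll dll)
{
  return dll->error;
}

void silc_dll_close(SilcDll dll)
{
  if (dll->handle)
    dlclose(dll->handle);
  free(dll);
}
```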
-
- o Add directory opening/traversing functions
-
- o silc_getopt routines
-
- o silc_hash_table_replace -> silc_hash_table_set. Retain support for
- silc_hash_table_replace as macro.
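-
-   The retained name could be a trivial compatibility macro; a sketch
-   with a stand-in stub for the renamed function:

```c
#include <stdbool.h>

/* Stand-in stub for the renamed function; the real one lives in
   lib/silcutil/silchashtable.[ch] and takes SilcHashTable. */
static bool silc_hash_table_set(void *ht, void *key, void *value)
{
  (void)ht; (void)key; (void)value;
  return true;
}

/* Old name retained as a compatibility macro. */
#define silc_hash_table_replace(ht, key, value) \
  silc_hash_table_set((ht), (key), (value))
```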
-
- o The SILC Event signals. Asynchronous events that can be created,
- connected to and signalled. Either own event routines or glued into
- SilcSchedule:
-
-   SilcTask silc_schedule_task_add_event(SilcSchedule schedule,
-                                         const char *event, ...);
-   SilcBool silc_schedule_event_connect(SilcSchedule schedule,
-                                        const char *event,
-                                        SilcTaskCallback event_callback,
-                                        void *context);
-   SilcBool silc_schedule_event_signal(SilcSchedule schedule,
-                                       const char *event, ...);
-
-   Example:
-
-   silc_schedule_task_add_event(schedule, "connected",
-                                SILC_PARAM_UI32_INT,
-                                SILC_PARAM_BUFFER,
-                                SILC_PARAM_END);
-   silc_schedule_event_connect(schedule, "connected", connected_cb, ctx);
-   silc_schedule_event_signal(schedule, "connected", integer, buf,
-                              SILC_PARAM_END);
-
-   SILC_TASK_CALLBACK(connected_cb)
-   {
-     FooCtx ctx = context;
-     va_list args;
-     SilcUInt32 integer;
-     SilcBuffer buf;
-
-     va_start(args, context);
-     integer = va_arg(args, SilcUInt32);
-     buf = va_arg(args, SilcBuffer);
-     va_end(args);
-     ...
-   }
-
-   Problems: Events would be SilcSchedule specific and would not work in
-   a multi-thread/multi-scheduler system.  The events should be copyable
-   between schedulers.  Another problem is signal delivery: do we
-   deliver signals synchronously, possibly from any thread to any other
-   thread, or do we deliver them through the target schedulers?  Going
-   through the schedulers would make signalling asynchronous (the data
-   must be duplicated and later freed), which is not very nice.
-
- o If the event signals are added, the SILC_PARAM_* stuff needs to be
- moved from silcbuffmt.h to silctypes.h or something similar.
-
- o In case the SILC Events are done we shall create a new concept of
-   parent and child SilcSchedules.  When a new SilcSchedule is created,
-   a parent can be associated with it.  This association could be made
-   either directly by the parent or by any of the children.  This way
-   the signals would in effect be global and would reach all child
-   schedulers.
-
-   This relationship would be associative only.  The schedulers remain
-   independent and run independently from each other.  All schedulers
-   would be linked and could be accessed from any of the schedulers.
-   It should be possible to retrieve the parent and to enumerate all
-   children from any of the schedulers.
-
-   SilcSchedule silc_schedule_init(int max_tasks, void *app_context,
-                                   SilcSchedule parent);
-   SilcSchedule silc_schedule_get_parent(SilcSchedule schedule);
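-
-   A sketch of the bookkeeping such an association needs (the structure
-   members are hypothetical, silc_schedule_init is reduced to the
-   parent argument only, and silc_schedule_num_children is an
-   illustrative helper):

```c
#include <stdlib.h>

/* Hypothetical sketch of the parent/child bookkeeping only; the real
   SilcSchedule also contains the task queues, locks, etc. */
typedef struct SilcScheduleStruct {
  struct SilcScheduleStruct *parent;    /* NULL for the root scheduler */
  struct SilcScheduleStruct *children;  /* head of child list */
  struct SilcScheduleStruct *next;      /* sibling link */
} *SilcSchedule;

SilcSchedule silc_schedule_init(SilcSchedule parent)
{
  SilcSchedule schedule = calloc(1, sizeof(*schedule));
  if (!schedule)
    return NULL;
  schedule->parent = parent;
  if (parent) {
    /* Link into the parent's child list; a signal delivered to the
       parent can then be propagated to every child scheduler. */
    schedule->next = parent->children;
    parent->children = schedule;
  }
  return schedule;
}

SilcSchedule silc_schedule_get_parent(SilcSchedule schedule)
{
  return schedule->parent;
}

/* Count the direct children; enumeration walks the sibling list. */
unsigned int silc_schedule_num_children(SilcSchedule schedule)
{
  unsigned int n = 0;
  SilcSchedule c;
  for (c = schedule->children; c; c = c->next)
    n++;
  return n;
}
```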
-
- o Additional scheduler changes: optimize silc_schedule_wakeup.  Wake
-   the scheduler up only if it is actually waiting for something; if it
-   is busy delivering tasks the wakeup is not needed.
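-
-   A sketch of the idea (the structure is hypothetical; in the real
-   scheduler the flag would live in SilcScheduleStruct and be set under
-   the scheduler lock around the select/poll call):

```c
#include <unistd.h>

/* Hypothetical wakeup state for the sketch. */
struct wakeup_state {
  int wakeup_pipe[2];   /* [0] is the read end polled by the scheduler */
  int is_waiting;       /* nonzero only while blocked in select/poll */
};

/* Wake the scheduler only if it is actually blocked waiting; when it
   is busy delivering tasks it will re-check its queues anyway, so the
   pipe write can be skipped. */
void schedule_wakeup(struct wakeup_state *s)
{
  if (s->is_waiting) {
    char c = 1;
    (void)write(s->wakeup_pipe[1], &c, 1);
  }
}
```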
-
- o Structured log messages for the Log API.  Allows machine-readable
-   log messages and sending any kind of data in a log message.
-
- o Base64 to an own API
-
- o Timer API
-
- o Add builtin SOCKS and HTTP proxy support, the SOCKS support at
-   least.  SILC currently supports SOCKS4 and SOCKS5, but the support
-   needs to be compiled in separately.
-
- o silc_stringprep to non-allocating version.
-
- o SilcStack aware SilcHashTable.
-
- o SilcStack aware SilcDList.
-
- o Thread pool API. Add this to lib/silcutil/silcthread.[ch].
-
-   typedef void (*SilcThreadPoolFunc)(SilcSchedule schedule,
-                                      void *context);
-
-   /* Allocate thread pool with at least `min_threads' and at most
-      `max_threads' many threads.  If `stack' is non-NULL all memory
-      is allocated from the `stack'.  If `start_min_threads' is TRUE
-      this will start `min_threads' many threads immediately. */
-   SilcThreadPool silc_thread_pool_alloc(SilcStack stack,
-                                         SilcUInt32 min_threads,
-                                         SilcUInt32 max_threads,
-                                         SilcBool start_min_threads);
-
-   /* Free thread pool.  If `wait_unfinished' is TRUE this will block
-      and wait until all remaining active threads have finished before
-      freeing the pool. */
-   void silc_thread_pool_free(SilcThreadPool tp, SilcBool wait_unfinished);
-
-   /* Run the `run' function with `run_context' in one of the threads
-      in the thread pool.  Returns FALSE if the thread pool is being
-      freed.  If there are no free threads left in the pool this will
-      queue the `run' function and call it once a thread becomes free.
-
-      If `completion' is non-NULL it will be called to indicate
-      completion of the `run' function.  If `schedule' is non-NULL the
-      `completion' will be called through the scheduler in the main
-      thread.  If it is NULL the `completion' is called directly from
-      the thread after the `run' function has returned. */
-   SilcBool silc_thread_pool_run(SilcThreadPool tp,
-                                 SilcSchedule schedule,
-                                 SilcThreadPoolFunc run,
-                                 void *run_context,
-                                 SilcThreadPoolFunc completion,
-                                 void *completion_context);
-
-   /* Modify the maximum number of threads in the pool. */
-   void silc_thread_pool_set_max_threads(SilcThreadPool tp,
-                                         SilcUInt32 max_threads);
-
-   /* Returns the maximum number of threads the pool can grow to. */
-   SilcUInt32 silc_thread_pool_num_max_threads(SilcThreadPool tp);
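-
-   A self-contained sketch of the proposed semantics (simplified: no
-   SilcStack or SilcSchedule, no queueing, and `completion' is called
-   directly from the thread, i.e. the `schedule' == NULL case above;
-   silc_thread_pool_run here simply refuses work once `max_threads'
-   threads have been started):

```c
#include <pthread.h>
#include <stdlib.h>

typedef void (*SilcThreadPoolFunc)(void *context);

typedef struct SilcThreadPoolStruct {
  pthread_t *threads;
  unsigned int num_threads;
  unsigned int max_threads;
} *SilcThreadPool;

typedef struct {
  SilcThreadPoolFunc run, completion;
  void *run_context, *completion_context;
} SilcThreadPoolCall;

/* Thread entry: run the work, then the completion, then clean up. */
static void *pool_thread(void *arg)
{
  SilcThreadPoolCall *call = arg;
  call->run(call->run_context);
  if (call->completion)
    call->completion(call->completion_context);
  free(call);
  return NULL;
}

SilcThreadPool silc_thread_pool_alloc(unsigned int max_threads)
{
  SilcThreadPool tp = calloc(1, sizeof(*tp));
  if (!tp)
    return NULL;
  tp->threads = calloc(max_threads, sizeof(pthread_t));
  tp->max_threads = max_threads;
  return tp;
}

/* Returns 0 when no thread slot is free; the real API would queue
   the call instead. */
int silc_thread_pool_run(SilcThreadPool tp,
                         SilcThreadPoolFunc run, void *run_context,
                         SilcThreadPoolFunc completion,
                         void *completion_context)
{
  SilcThreadPoolCall *call;

  if (tp->num_threads >= tp->max_threads)
    return 0;
  call = calloc(1, sizeof(*call));
  if (!call)
    return 0;
  call->run = run;
  call->run_context = run_context;
  call->completion = completion;
  call->completion_context = completion_context;
  if (pthread_create(&tp->threads[tp->num_threads], NULL,
                     pool_thread, call)) {
    free(call);
    return 0;
  }
  tp->num_threads++;
  return 1;
}

/* Frees the pool; always waits for active threads, i.e. the
   `wait_unfinished' == TRUE case. */
void silc_thread_pool_free(SilcThreadPool tp)
{
  unsigned int i;
  for (i = 0; i < tp->num_threads; i++)
    pthread_join(tp->threads[i], NULL);
  free(tp->threads);
  free(tp);
}

/* Example workers for the usage below. */
static void example_run(void *context) { (void)context; }
static void example_completion(void *context) { ++*(int *)context; }
```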