Channel: Media Foundation Development for Windows Desktop forum

Suggestions for encoding 3 custom live sources, 2 audio and 1 video, to single video file


Hi,

I added microphone recording to my app, which led me to a new problem. The recording itself works perfectly, but now I have a second audio stream, and the sink writer alone seems to be the wrong answer for direct encoding. With only one of the two audio streams my encoding works, so I think I have to build a chain with a mixer MFT or some multiplexing in between, right?

I read about MFNode and its "Mixer MFT". Is a custom MFT the only way to mix two audio streams together? And is that Mixer MFT capable of mixing two wave sources with different channel counts? It might also be tricky to use such an MFT here, because my sources are all live and I am shoveling bytes directly into media buffers; everything happens at different timings and the samples never have the same duration or size.
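For illustration, a minimal sketch of what the mixing step itself boils down to once both streams share a common format; MixPcm16 is a hypothetical helper, and the assumption that both inputs already have the same sample rate, channel count and buffer length sidesteps exactly the resampling and timestamp-alignment problems described above:

#include <algorithm>
#include <cstdint>
#include <cstddef>

// Naive mix of two 16-bit PCM buffers of equal length into 'out'.
// Real live sources would additionally need resampling, channel up-/down-mixing
// and timestamp alignment before this step.
void MixPcm16(const int16_t* a, const int16_t* b, int16_t* out, size_t samples)
{
    for (size_t i = 0; i < samples; ++i)
    {
        int32_t sum = static_cast<int32_t>(a[i]) + static_cast<int32_t>(b[i]);
        sum = std::max(-32768, std::min(32767, sum));   // clamp to avoid wrap-around
        out[i] = static_cast<int16_t>(sum);
    }
}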

I am researching this now and will update the thread if I find a solution. If you have any suggestions for the task, I would be happy to hear them.

regards

coOKie





Must have certification (for PMP Components) for HDCP?

Hello, I am developing PMP components.
I have tried to apply HDCP, but I do not have any certificate for my PMP components (i.e. I created the PMP Media Session instance with the unprotected-process option).
In this case, will HDCP still be applied? If not, what certification do my PMP components need?

Windows Media Encoder 9.0 get started time


Hi

If the encoder was started 1 minute ago and a client then uses Windows Media Player to connect to the server, after 1 second of playback the player reports a position of 1 second. How can I get 61 seconds (the time since the encoder started) instead of 1 second?

I did a lot of searching in the forums and the documentation, but I could not find a solution.

Thanks.

can a custom hardware MFT be written to access hardware memory directly?


Hi,

When I investigated using the GPU for encoding, I found a lot of people mentioning that the speed advantage is only marginal. Some claimed 20%, but when I tested it myself I saw an advantage of only 1-2%.

I thought that logically and technically something must be wrong, so I went back and inspected my app's pipeline. In most "regular" cases developers use the GPU for decoding/encoding/transcoding existing files, so the data starts in system memory, is moved to the GPU, and then back again. The video files already exist and the frames can be consumed as fast as possible, so the encoding looks like:

sysmem(cpu)->vram(gpu)->sysmem(cpu)

In my case a "non-regular" GPU encoding is at work. When I set my live encoding to, say, 25 fps, a frame only becomes available for encoding every 40 milliseconds. That alone undermines the GPU speed advantage, but the more important problem is the whole delivery process to the GPU. My app's encoding looks like this:

vram(gpu)->sysmem(cpu)->sysmem(cpu)->vram(gpu)->sysmem(cpu)

The source in my encoding is a D3D surface (IDirect3DSurface9 or ID3D10/11Texture2D) that is copied to system memory so that the CPU- or GPU-implemented encoders can process it. The two sysmem steps in a row are due to RGBA-to-BGRA color conversion: in sysmem the bytes get swapped and copied into a media buffer, which also lives in sysmem. Even though I am using SSE2 and AVX "non-temporal" store functions, this intermediate step of course slows the whole encoding process down by a good margin. Then the data gets moved to the GPU, and after encoding is done it gets moved back to system memory for storage.

After inspecting this, my mind went straight to the barricades: why all this senseless moving around when the data is already in VRAM and the GPU has full access to it? I could even use the GPU for the color conversion and would not have to move the data to system memory at all. The encoding should look like this:

vram(gpu)->sysmem(cpu)

I read about "Hardware Handshake Sequence" and that one hardware MFT can connect its output to another hardware MFTs input by using MFT_CONNECTED_STREAM_ATTRIBUTE and MFT_CONNECTED_TO_HW_STREAM. As i understand it its not possible for Hardware MFTs to consume data directly from device memory. The closest i could find was this thread where it seems that it is possible with Intel Quick Sync. In my opinion its technically possible with all GPU encoders, be it NVIDIA, AMD or INTEL, the question is how much of extra work outside of media foundation need to be done. Or is it possible to rewrite the MFT implementation these vendors did and make the MFT consuming from hardware memory ?

Another thought of mine is a slightly futuristic scenario relating to "Unified Memory". Theoretically, on a platform that uses this model, say hUMA from AMD, it should be possible to use MFCreateDXGISurfaceBuffer and send the sample to the hardware MFT. Internally only the pointer would be handed over, and the MFT could consume the data directly without it being moved first; all of that would of course only be possible if Microsoft implemented support for such architectures in their memory handling.
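For illustration, a minimal sketch (an assumption, not a confirmed zero-copy path for any particular encoder) of how a D3D11 texture can be wrapped in an IMFSample via MFCreateDXGISurfaceBuffer so that a D3D11-aware MFT could consume it without an intermediate system-memory copy; WrapTextureInSample and its parameters are hypothetical:

#include <d3d11.h>
#include <mfapi.h>

// Wrap an existing ID3D11Texture2D in an IMFSample without copying it to
// system memory. A D3D11-aware MFT can consume such a sample directly; a
// purely software MFT would still end up locking/copying the surface.
HRESULT WrapTextureInSample(ID3D11Texture2D* pTexture, LONGLONG hnsTime,
                            LONGLONG hnsDuration, IMFSample** ppSample)
{
    IMFMediaBuffer* pBuffer = nullptr;
    IMFSample*      pSample = nullptr;

    HRESULT hr = MFCreateDXGISurfaceBuffer(__uuidof(ID3D11Texture2D), pTexture,
                                           0 /*subresource*/, FALSE, &pBuffer);
    if (SUCCEEDED(hr)) hr = MFCreateSample(&pSample);
    if (SUCCEEDED(hr)) hr = pSample->AddBuffer(pBuffer);
    if (SUCCEEDED(hr)) hr = pSample->SetSampleTime(hnsTime);
    if (SUCCEEDED(hr)) hr = pSample->SetSampleDuration(hnsDuration);

    if (SUCCEEDED(hr))
    {
        *ppSample = pSample;   // caller releases
        pSample = nullptr;
    }
    if (pSample) pSample->Release();
    if (pBuffer) pBuffer->Release();
    return hr;
}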

Phew, walls of text, but I hope that someone has knowledge or maybe just an idea to share. I will investigate this further, as it seems to be the fastest possible way to encode D3D surfaces.

regards

coOkie





IP camera frames slowly get delayed by up to 1.5 min


Hello,

I have a problem:

I created a topology to which I attached an IP camera. The topology looks like:

my RTSP source -> MPEG4 decoder -> RGBtoYUV12 -> VP8 encoder

At the beginning of the stream I see low-latency video, so far so good.

But later on I started to see the events in the video occur later and later. I turned on the timestamp overlay on the camera (lucky that it was built in) and I clearly see a delay of around 1.5 minutes between the computer clock and the timestamp. I guess video frames somehow get accumulated. I checked my VP8 encoder and see that its queue is only one frame deep, which is correct. I don't think the problem is on the source side, because it does almost nothing: it just reads RTSP packets from the camera and passes them to the MPEG4 decoder. I cannot check the queue size of the MPEG4 decoder, but I suspect it has some data queued. Is there any setting that could restrict the decoder and force it not to grow this buffer? BTW, the platform is Win7.
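For illustration, one generic mitigation sketch (an assumption, not a known setting of the Microsoft decoder): drop samples that have fallen too far behind a reference clock before handing them to the decoder. IsSampleTooLate, the clock source and the 2-second threshold are all hypothetical:

#include <mfobjects.h>

// Decide whether a sample has fallen too far behind the reference time.
// hnsClockTime would come from the pipeline's presentation clock; the
// threshold is an arbitrary example value.
bool IsSampleTooLate(IMFSample* pSample, LONGLONG hnsClockTime)
{
    const LONGLONG kMaxLagHns = 20000000;   // 2 seconds in 100-ns units

    LONGLONG hnsSampleTime = 0;
    if (FAILED(pSample->GetSampleTime(&hnsSampleTime)))
        return false;   // no timestamp - keep the sample

    return (hnsClockTime - hnsSampleTime) > kMaxLagHns;
}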

Thanks

Aleksey



Where is the registry key for the default audio device in Windows 7 x64?


My original post, which I submitted to the Scripting Guys forum because I am trying to finish a script, was declared as needing to be posted to another forum.

The second time I posted this question, I posted it in the Windows 7 Media forums... and was redirected here by a link provided in the only response. Hopefully, I will finally find my answer here, in what seems to be a Vista forum... Please, I do not wish to be redirected again.

I need this for an AHK (AutoHotKey) script that I am working on that will toggle between my two enabled audio playback and recording devices.

Going through the sound device GUI is tedious as I swap devices multiple times a day.

Please remember, I need the Win7 x64 registry location for these keys. The answer posted here: Where is the registry key for the default audio device in Windows 7 did not help me as it only answers the question for Windows 7 32 bit.

I thought that it would be these locations below... but after swapping devices, and refreshing the registry, the values never changed.

HKEY_LOCAL_MACHINE\SOFTWARE\Wow6432Node\Microsoft\Windows\CurrentVersion\MMDevices\DefaultDeviceHeuristics\Default\Role_0\Factor_1\Capture

HKEY_LOCAL_MACHINE\SOFTWARE\Wow6432Node\Microsoft\Windows\CurrentVersion\MMDevices\DefaultDeviceHeuristics\Default\Role_0\Factor_1\Render

If I were using 32-bit Windows, my AutoHotKey script would look something like this... Unfortunately I am on 64-bit.

; Toggle Sound Devices
^+PgUp::
    RegRead, Device, HKEY_CURRENT_USER, Software\Microsoft\Multimedia\Sound Mapper, Playback
    if(Device = "Realtek HD Audio output")
    {
        RegWrite, REG_SZ, HKEY_CURRENT_USER, Software\Microsoft\Multimedia\Sound Mapper, Playback, Sound Blaster World of Warcraft Wireless Headset
        Device := "SoundBlaster"
    }
    else
    {
        RegWrite, REG_SZ, HKEY_CURRENT_USER, Software\Microsoft\Multimedia\Sound Mapper, Playback, Realtek HD Audio output
        Device := "Realtek"
    }
    ToolTip, % "Sound Device: " Device
    SetTimer, ResetToolTip, 1000
return


; Clear the ToolTip
ReSetToolTip:
    ToolTip
    SetTimer, ReSetToolTip, Off
return

Added later on in the day:

I just ran a program that reports registry changes by comparing two snapshots: one taken with Realtek as my default device, and one taken with my SoundBlaster wireless headset as the default device.

Nothing changed in the registry. This is quite frustrating.

Is there nowhere in the Windows 7 x64 registry where the default audio playback and recording devices are stored?

Is there a specific config file that is targeted by mmsys.cpl (the sound device manager)?
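For illustration: on Vista and later the default endpoints are managed by the MMDevice API rather than a plain registry value, so a small helper invoked from the script is one possible route. Below is a minimal sketch that only reads the current default playback device via IMMDeviceEnumerator::GetDefaultAudioEndpoint; changing the default is not covered here, since that requires undocumented interfaces or third-party tools:

#include <windows.h>
#include <mmdeviceapi.h>
#include <functiondiscoverykeys_devpkey.h>
#include <stdio.h>

// Print the friendly name of the current default playback (render) device.
int main()
{
    CoInitialize(nullptr);

    IMMDeviceEnumerator* pEnum = nullptr;
    HRESULT hr = CoCreateInstance(__uuidof(MMDeviceEnumerator), nullptr, CLSCTX_ALL,
                                  __uuidof(IMMDeviceEnumerator), (void**)&pEnum);
    if (SUCCEEDED(hr))
    {
        IMMDevice* pDevice = nullptr;
        if (SUCCEEDED(pEnum->GetDefaultAudioEndpoint(eRender, eConsole, &pDevice)))
        {
            IPropertyStore* pProps = nullptr;
            if (SUCCEEDED(pDevice->OpenPropertyStore(STGM_READ, &pProps)))
            {
                PROPVARIANT varName;
                PropVariantInit(&varName);
                if (SUCCEEDED(pProps->GetValue(PKEY_Device_FriendlyName, &varName)))
                    wprintf(L"Default playback device: %s\n", varName.pwszVal);
                PropVariantClear(&varName);
                pProps->Release();
            }
            pDevice->Release();
        }
        pEnum->Release();
    }

    CoUninitialize();
    return 0;
}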


Transcode API targeting Windows XP


I know that the Transcode API was introduced in Windows 7.

The MP4 encoding sample works fine when built and tested on Windows 8 using VS2010.

How do I make the sample application target Windows XP, without installing any SDK (i.e. Windows SDK 7.1) on XP?

How do I build and execute it there?



Arun Kumar

h.264 rtp ts stream and media foundation


Dear community,

I need some help with Media Foundation. I have the following scenario:

An RTP multicast TS stream has to be displayed with the WPF MediaElement. I have found the Media Extension (Win8 Store App) example, which includes many examples of custom scheme handlers and so on. My idea was to implement a custom (rtp://) scheme handler that just "forwards" the byte stream to the decoders, or to create a custom media source that takes care of the decoding. Is this possible, or do I need to implement all the decoding myself? As far as I understand the architecture of MF, it should be possible to just forward the bytes to some MF component, because if I store the raw bytes on my hard disk and name the file something.mpg, the MediaElement plays the stream.
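For illustration, a minimal skeleton (assumed class name; the method bodies are stubs) of the IMFSchemeHandler such an rtp:// handler would implement; the actual RTP/TS depacketizing would live in the custom IMFMediaSource it hands back from EndCreateObject:

#include <mfidl.h>
#include <shlwapi.h>

// Skeleton of a custom rtp:// scheme handler. The real work - opening the
// multicast socket, depacketizing RTP/TS, and exposing elementary streams -
// belongs in the custom IMFMediaSource returned from EndCreateObject.
class CRtpSchemeHandler : public IMFSchemeHandler
{
    long m_cRef = 1;
public:
    // IUnknown
    STDMETHODIMP QueryInterface(REFIID riid, void** ppv)
    {
        static const QITAB qit[] = { QITABENT(CRtpSchemeHandler, IMFSchemeHandler), { 0 } };
        return QISearch(this, qit, riid, ppv);
    }
    STDMETHODIMP_(ULONG) AddRef() { return InterlockedIncrement(&m_cRef); }
    STDMETHODIMP_(ULONG) Release()
    {
        ULONG c = InterlockedDecrement(&m_cRef);
        if (c == 0) delete this;
        return c;
    }

    // IMFSchemeHandler
    STDMETHODIMP BeginCreateObject(LPCWSTR pwszURL, DWORD dwFlags,
                                   IPropertyStore* pProps,
                                   IUnknown** ppCancelCookie,
                                   IMFAsyncCallback* pCallback,
                                   IUnknown* punkState)
    {
        // Would start connecting to pwszURL asynchronously and invoke pCallback.
        return E_NOTIMPL;
    }
    STDMETHODIMP EndCreateObject(IMFAsyncResult* pResult,
                                 MF_OBJECT_TYPE* pObjectType,
                                 IUnknown** ppObject)
    {
        // Would return the custom IMFMediaSource (*pObjectType = MF_OBJECT_MEDIASOURCE).
        return E_NOTIMPL;
    }
    STDMETHODIMP CancelObjectCreation(IUnknown* pCancelCookie) { return E_NOTIMPL; }
};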

Thanks in advance, and sorry if I'm too naive :) but MF is "somewhat" complex... yet it seems to be the only way to display the stream in a Win8 app.

Michael

MediaFoundation blog


Hello.

MF blog

Last post is 20 Jan 2011...

What's new with MF?

What about the future of MF?

Where are you, Media Foundation developers? What are you doing?




Media Foundation onReadSample wrong size of returned sample


Dear All,

I am facing an issue using Media Foundation on an Acer tablet running 32-bit Windows 8.

When enumerating the capture formats (as explained in the Media Foundation documentation), I get the following supported formats for the camera (on the first stream):

  • 0 : MFVideoFormat_NV12, resolution : 448x252, framerate : 30000x1001
  • 1 : MFVideoFormat_YUY2, resolution : 448x252, framerate : 30000x1001
  • 2 : MFVideoFormat_NV12, resolution : 640x360, framerate : 30000x1001
  • 3 : MFVideoFormat_YUY2, resolution : 640x360, framerate : 30000x1001
  • 4 : MFVideoFormat_NV12, resolution : 640x480, framerate : 30000x1001
  • 5 : MFVideoFormat_YUY2, resolution : 640x480, framerate : 30000x1001

I then set the capture format, in this case the one at index 5, using the following function, as described in the example:

hr = pHandler->SetCurrentMediaType(pType);

This function executed without error. The camera should thus be configured to capture in YUY2 with a resolution of 640*480.

In the onReadSample callback, I should receive a sample with a buffer of size:

640 * 480 * sizeof(unsigned char) * 2 = 614400    // YUY2 uses 2 bytes per pixel

However, I get a sample with a buffer of size 169344, which corresponds to the buffer size that the first format would produce (MFVideoFormat_NV12, resolution: 448x252, framerate: 30000x1001), since NV12 uses 12 bits per pixel:

448 * 252 * sizeof(unsigned char) * 3 / 2 = 169344

So my question is: why does the callback not return samples of the type I selected? Any advice?
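For illustration, a sketch (an assumption, not the poster's code, which goes through IMFMediaTypeHandler) of doing the same selection directly on the source reader and reading back the type that was actually negotiated:

#include <mfapi.h>
#include <mfidl.h>
#include <mfreadwrite.h>

// Select native format index 5 on the first video stream and read back the
// type the reader actually ends up using. pReader is assumed to be an
// already-created IMFSourceReader for the capture device.
HRESULT SelectNativeFormat(IMFSourceReader* pReader, DWORD dwFormatIndex)
{
    IMFMediaType* pNative = nullptr;
    IMFMediaType* pCurrent = nullptr;

    HRESULT hr = pReader->GetNativeMediaType(
        (DWORD)MF_SOURCE_READER_FIRST_VIDEO_STREAM, dwFormatIndex, &pNative);

    if (SUCCEEDED(hr))
        hr = pReader->SetCurrentMediaType(
            (DWORD)MF_SOURCE_READER_FIRST_VIDEO_STREAM, nullptr, pNative);

    if (SUCCEEDED(hr))
        hr = pReader->GetCurrentMediaType(
            (DWORD)MF_SOURCE_READER_FIRST_VIDEO_STREAM, &pCurrent);

    if (SUCCEEDED(hr))
    {
        // Inspect what was negotiated, e.g. the frame size actually in effect.
        UINT32 width = 0, height = 0;
        MFGetAttributeSize(pCurrent, MF_MT_FRAME_SIZE, &width, &height);
    }

    if (pCurrent) pCurrent->Release();
    if (pNative)  pNative->Release();
    return hr;
}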

Thanks in advance

Best regards

Where can I find an audio delay plugin for WME9?


I am live broadcasting with WME9 and audio and video are out of sync.

Audio must be delayed 200ms.

Where can I find such a plugin? (The existing plugins have a lot of nice effects, but no audio delay.)


screen capture codec


I'm learning how to program with Media Foundation, and I gather there is a codec called WindowsMediaScreen9 that I need. The only download I can find is from 2004, and it says it is not compatible with Windows 8.

Is there a more up-to-date version available or some newer technology that I should be using?


~~~ PEr aRDUa ad asTrA ~~~ (through adversity to the stars)

Convert image formats from webcam


Hi

I've been using the MFCaptureToFile sample (http://msdn.microsoft.com/en-us/library/windows/desktop/ee663604(v=vs.85).aspx) to get video frames from a webcam and pass them to a video encoder (WebM, which requires the YV12 image format). My webcam, however, only provides RGB24 or YUY2 images, so I need to convert these to YV12. I can, of course, do it myself with my own conversion algorithms, but I'd prefer to use OS calls since these can make use of hardware to do the conversion.

I've looked at MFTEnumEx to see if it has a way to convert, but it gives errors for pretty much any format I insert. Here's what I've been trying to do:
    MFT_REGISTER_TYPE_INFO inputFilter = { MFMediaType_Video, MFVideoFormat_RGB24  };
    MFT_REGISTER_TYPE_INFO outputFilter = { MFMediaType_Video, MFVideoFormat_YV12  };
    UINT32 unFlags = MFT_ENUM_FLAG_SYNCMFT | MFT_ENUM_FLAG_LOCALMFT | MFT_ENUM_FLAG_SORTANDFILTER;
    UINT32      cDevices;
    IMFActivate **ppActivaters = NULL;
    HRESULT hr = MFTEnumEx(MFT_CATEGORY_VIDEO_DECODER, unFlags, &inputFilter, &outputFilter, &ppActivaters, &cDevices);
    assert(cDevices > 0);
    assert(SUCCEEDED(hr));
Doesn't Media Foundation provide a way to convert image formats from video devices? 
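For illustration, a hedged variant (an assumption, not a confirmed fix): the same enumeration run against MFT_CATEGORY_VIDEO_PROCESSOR, the category under which format/color-space converters such as the Video Processor MFT and the Color Converter DSP are registered. Whether an RGB24-to-YV12 (or YUY2-to-YV12) match is actually found depends on the OS and the MFTs installed:

#include <mfapi.h>
#include <mftransform.h>
#include <cassert>

// Enumerate converters that claim RGB24 in and YV12 out.
void EnumerateRgb24ToYv12Converters()
{
    MFT_REGISTER_TYPE_INFO inputFilter  = { MFMediaType_Video, MFVideoFormat_RGB24 };
    MFT_REGISTER_TYPE_INFO outputFilter = { MFMediaType_Video, MFVideoFormat_YV12  };
    UINT32 unFlags = MFT_ENUM_FLAG_SYNCMFT | MFT_ENUM_FLAG_LOCALMFT |
                     MFT_ENUM_FLAG_SORTANDFILTER;

    IMFActivate** ppActivate = NULL;
    UINT32 cMFTs = 0;

    HRESULT hr = MFTEnumEx(MFT_CATEGORY_VIDEO_PROCESSOR, unFlags,
                           &inputFilter, &outputFilter, &ppActivate, &cMFTs);
    assert(SUCCEEDED(hr));

    // Release the activation objects once inspected.
    for (UINT32 i = 0; i < cMFTs; i++)
        ppActivate[i]->Release();
    CoTaskMemFree(ppActivate);
}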

-- Bjoern



Kind regards Bjoern


Generic MFT implementation which gets inserted for all Media Types


Hi,

I want to implement a generic MFT that gets inserted for every media type while playing any media file in Windows Media Player. How can I achieve that?

There are registry settings which specify which transform should be loaded for which media type. By modifying those we can achieve this, but we would have to modify them for every media type. I want to write a generic MFT that can be inserted dynamically.

In DirectShow we had the merit concept to achieve this. What is the equivalent in Media Foundation?

Best Regards,

Sharad

Get encoder name from SinkWriter or ICodecAPI or IMFTransform


I'm using the SinkWriter in order to encode video using media foundation.

After I initialize the SinkWriter, I would like to get the underlying encoder it uses, and print out its name, so I can see what encoder it uses. (In my case, the encoder is most probably the H.264 Video Encoder included in MF).

I can get references to the encoder's ICodecAPI and IMFTransform interface (using pSinkWriter->GetServiceForStream), but I don't know how to get the encoder's friendly name using those interfaces.

Does anyone know how to get the encoder's friendly name from the SinkWriter? Or from its ICodecAPI or IMFTransform interface?
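For illustration, a sketch (an assumption, not a guaranteed answer: MFT_FRIENDLY_NAME_Attribute is typically populated for hardware MFTs, so the software H.264 encoder may simply not expose it) of querying the transform's attribute store through the interfaces already obtained from GetServiceForStream:

#include <mfapi.h>
#include <mfreadwrite.h>
#include <mftransform.h>

// Try to read a friendly name from the encoder behind a sink writer stream.
// Returns an allocated string via ppszName (caller frees with CoTaskMemFree),
// or a failure HRESULT if the MFT does not expose the attribute.
HRESULT GetEncoderFriendlyName(IMFSinkWriter* pWriter, DWORD dwStreamIndex,
                               LPWSTR* ppszName)
{
    IMFTransform*  pTransform = nullptr;
    IMFAttributes* pAttributes = nullptr;

    HRESULT hr = pWriter->GetServiceForStream(dwStreamIndex, GUID_NULL,
                                              IID_PPV_ARGS(&pTransform));
    if (SUCCEEDED(hr))
        hr = pTransform->GetAttributes(&pAttributes);

    if (SUCCEEDED(hr))
    {
        UINT32 cch = 0;
        // Often set only for hardware MFTs; a "not found" error is likely otherwise.
        hr = pAttributes->GetAllocatedString(MFT_FRIENDLY_NAME_Attribute,
                                             ppszName, &cch);
    }

    if (pAttributes) pAttributes->Release();
    if (pTransform)  pTransform->Release();
    return hr;
}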


WMP Plugin Crashes Due To Getting Playlist Item Frequently


Greetings!

Recently I have been trying to write a Windows Media Player plugin.
I have implemented several different kinds of functions in my plugin, and they all work normally,
but some of them will sometimes crash the plugin if I use them frequently.

For example, in the following code I try to enumerate all playlists in the media library;
if I do the same thing 500 times, once every 100 milliseconds, the plugin will crash.

 

CComPtr<IWMPPlaylistCollection> spPlaylistCollection = NULL;
HRESULT hr = m_spCore->get_playlistCollection(&spPlaylistCollection);
if (SUCCEEDED(hr) && spPlaylistCollection)
{
	CComPtr<IWMPPlaylistArray> spPlaylistArray = NULL;
	hr = spPlaylistCollection->getAll(&spPlaylistArray);
	if (SUCCEEDED(hr) && spPlaylistArray)
	{
		long count = 0;
		hr = spPlaylistArray->get_count(&count);
		for (long i=0; i<count; i++)
		{
			CComPtr<IWMPPlaylist> spPlaylist = NULL;
			hr = spPlaylistArray->item(i, &spPlaylist);  // Plugin may crash here
			if (SUCCEEDED(hr) && spPlaylist)
			{
				BSTR bstrName = SysAllocString(L"");
				hr = spPlaylist->get_name(&bstrName);
				UINT len = SysStringLen(bstrName);
				if (SUCCEEDED(hr) && len > 0 && len < _countof(g_sAllPlaylists.wcsName[0]))
				{
					OutputDebugStringW(bstrName);
					// Note: loCount is used as an index into wcsName without being
					// checked against the array's capacity.
					wcscpy_s(g_sAllPlaylists.wcsName[g_sAllPlaylists.loCount], bstrName);
					g_sAllPlaylists.loCount++;
				}
				}
				SysFreeString(bstrName);
				bstrName = NULL;

				spPlaylist.Release();
			}
		}
		spPlaylistArray.Release();
	}
	spPlaylistCollection.Release();
}


I found that it is always "spPlaylistArray->item" that causes the crash; it does not return, and I cannot catch any exception with "_com_error" or "...".

Has anyone had the same problem before? Or is there anything I am missing in the above code?
This problem really bothers me. Thanks for your patience!



Get a BITMAPINFO structure by using IMFVideoDisplayControl.GetCurrentImage


Hello,

What I'm trying to do is the following:

I have a video player and I would like to get a BITMAPINFO structure.

Media Foundation offers the possibility to get a BITMAPINFOHEADER by using IMFVideoDisplayControl.GetCurrentImage.

So far I have got the header and put it in the bmiHeader section of the BITMAPINFO.

What I still have to do is populate the bmiColors section of the BITMAPINFO structure.

How can I do this?

My function looks like this:

BITMAPINFO* ToTopoBuilderMF::GetCurrentImageBitMapInfo()
{
  BITMAPINFO*       pbmfh = NULL;
  BITMAPINFOHEADER  Bih = { 0 };
  BYTE*             pbDIB = NULL;
  DWORD             pcbDib = 0;
  LONGLONG          llTimeStamp = 0;

  Bih.biSize = sizeof(BITMAPINFOHEADER);

  if (m_pVideoDisplay)
  {
    HRESULT hr = m_pVideoDisplay->GetCurrentImage(&Bih, &pbDIB, &pcbDib, &llTimeStamp);
    if (SUCCEEDED(hr))
    {
      // Allocate the BITMAPINFO before copying the returned header into it;
      // bmiColors is still left unfilled - that is the open question.
      pbmfh = (BITMAPINFO*)CoTaskMemAlloc(sizeof(BITMAPINFO));
      if (pbmfh)
      {
        ZeroMemory(pbmfh, sizeof(BITMAPINFO));
        pbmfh->bmiHeader = Bih;
      }
    }
    CoTaskMemFree(pbDIB);  // free the DIB bits returned by GetCurrentImage
  }

  return pbmfh;
}

Best Regards,

Alin Ionascu

Miracast Detection?


I'm looking for a way to detect Miracast support on the platform. I'm aware that there are two things to be concerned with:

  • A compatible Wi-Fi direct device
  • The presence of a Miracast-enabled graphics driver (exposed as a separate UMDF - specifically, a DLL)

I believe I've figured out how to detect whether the Microsoft Wi-Fi Direct Virtual Adapter is installed and present (although it seems a little convoluted - ideally, I'd query the network adapter for a device property, but it appears that the virtual Wi-Fi Direct device only appears after pairing [is this similar to Bluetooth, where the adapter only appears after pairing?]). What I'm doing now is looking for PnP devices with {5d624f94-8850-40c3-a3fa-a4fd2080baf3}\vwifimp_wfd in their HardwareIds list. Is there a better way that ensures the WFD device supports all the capabilities required for Miracast (from a transport perspective)?

The Miracast-enabled graphics driver is a little more challenging. Apparently, it looks like the way to go is to query MediaFoundation (I'm aware that the HCK looks at traces generated by the drivers, but I don't know how to do the same thing). Is there another way (perhaps enumerating the graphics driver properties via AQS with a custom property)?

At the end of the day, this requires a leap of faith that the combination of the two means Miracast is enabled in the OS. Is this always the case (that if WFD works and the graphics driver is installed, then the system is Miracast-capable)?

Thanks in advance,

-Andre
